Jan 29 16:04:07.912074 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 16:04:07.912095 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:04:07.912105 kernel: KASLR enabled
Jan 29 16:04:07.912110 kernel: efi: EFI v2.7 by EDK II
Jan 29 16:04:07.912116 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 29 16:04:07.912121 kernel: random: crng init done
Jan 29 16:04:07.912128 kernel: secureboot: Secure boot disabled
Jan 29 16:04:07.912133 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:04:07.912139 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 16:04:07.912146 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 16:04:07.912152 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912158 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912164 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912170 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912177 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912184 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912190 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912197 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912203 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:04:07.912209 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 16:04:07.912215 kernel: NUMA: Failed to initialise from firmware
Jan 29 16:04:07.912221 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 16:04:07.912227 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 29 16:04:07.912233 kernel: Zone ranges:
Jan 29 16:04:07.912239 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 16:04:07.912246 kernel: DMA32 empty
Jan 29 16:04:07.912252 kernel: Normal empty
Jan 29 16:04:07.912259 kernel: Movable zone start for each node
Jan 29 16:04:07.912265 kernel: Early memory node ranges
Jan 29 16:04:07.912271 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 29 16:04:07.912290 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 29 16:04:07.912297 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 29 16:04:07.912303 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 16:04:07.912309 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 16:04:07.912315 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 16:04:07.912321 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 16:04:07.912327 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 16:04:07.912335 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 16:04:07.912341 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 16:04:07.912348 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 16:04:07.912356 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:04:07.912363 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 16:04:07.912369 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:04:07.912383 kernel: psci: Trusted OS migration not required
Jan 29 16:04:07.912389 kernel: psci: SMC Calling Convention v1.1
Jan 29 16:04:07.912396 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 16:04:07.912402 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:04:07.912409 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:04:07.912416 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 16:04:07.912422 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:04:07.912429 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:04:07.912435 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 16:04:07.912442 kernel: CPU features: detected: Spectre-v4
Jan 29 16:04:07.912450 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:04:07.912456 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 16:04:07.912463 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 16:04:07.912474 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 16:04:07.912481 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 16:04:07.912487 kernel: alternatives: applying boot alternatives
Jan 29 16:04:07.912495 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:04:07.912502 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:04:07.912508 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:04:07.912515 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:04:07.912521 kernel: Fallback order for Node 0: 0
Jan 29 16:04:07.912529 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 16:04:07.912535 kernel: Policy zone: DMA
Jan 29 16:04:07.912542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:04:07.912548 kernel: software IO TLB: area num 4.
Jan 29 16:04:07.912555 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 16:04:07.912562 kernel: Memory: 2387536K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184752K reserved, 0K cma-reserved)
Jan 29 16:04:07.912568 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:04:07.912575 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:04:07.912583 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:04:07.912590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:04:07.912596 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:04:07.912603 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:04:07.912611 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:04:07.912617 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:04:07.912624 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:04:07.912630 kernel: GICv3: 256 SPIs implemented
Jan 29 16:04:07.912636 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:04:07.912643 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:04:07.912649 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 16:04:07.912656 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 16:04:07.912662 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 16:04:07.912669 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 16:04:07.912676 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 16:04:07.912684 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 16:04:07.912690 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 16:04:07.912697 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:04:07.912703 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:04:07.912710 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 16:04:07.912716 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 16:04:07.912723 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 16:04:07.912730 kernel: arm-pv: using stolen time PV
Jan 29 16:04:07.912736 kernel: Console: colour dummy device 80x25
Jan 29 16:04:07.912743 kernel: ACPI: Core revision 20230628
Jan 29 16:04:07.912750 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 16:04:07.912758 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:04:07.912765 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:04:07.912771 kernel: landlock: Up and running.
Jan 29 16:04:07.912778 kernel: SELinux: Initializing.
Jan 29 16:04:07.912785 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:04:07.912792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:04:07.912798 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:04:07.912805 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:04:07.912812 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:04:07.912820 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:04:07.912826 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 16:04:07.912833 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 16:04:07.912839 kernel: Remapping and enabling EFI services.
Jan 29 16:04:07.912846 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:04:07.912853 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:04:07.912859 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 16:04:07.912866 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 16:04:07.912873 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:04:07.912880 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 16:04:07.912887 kernel: Detected PIPT I-cache on CPU2
Jan 29 16:04:07.912899 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 16:04:07.912907 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 16:04:07.912926 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:04:07.912933 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 16:04:07.912940 kernel: Detected PIPT I-cache on CPU3
Jan 29 16:04:07.912947 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 16:04:07.912954 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 16:04:07.912962 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:04:07.912969 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 16:04:07.912976 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:04:07.912983 kernel: SMP: Total of 4 processors activated.
Jan 29 16:04:07.912990 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:04:07.912997 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 16:04:07.913005 kernel: CPU features: detected: Common not Private translations
Jan 29 16:04:07.913012 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:04:07.913020 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 16:04:07.913027 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 16:04:07.913034 kernel: CPU features: detected: LSE atomic instructions
Jan 29 16:04:07.913041 kernel: CPU features: detected: Privileged Access Never
Jan 29 16:04:07.913048 kernel: CPU features: detected: RAS Extension Support
Jan 29 16:04:07.913055 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 16:04:07.913062 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:04:07.913069 kernel: alternatives: applying system-wide alternatives
Jan 29 16:04:07.913076 kernel: devtmpfs: initialized
Jan 29 16:04:07.913083 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:04:07.913091 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:04:07.913098 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:04:07.913105 kernel: SMBIOS 3.0.0 present.
Jan 29 16:04:07.913112 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 16:04:07.913119 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:04:07.913126 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:04:07.913133 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:04:07.913140 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:04:07.913148 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:04:07.913156 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Jan 29 16:04:07.913163 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:04:07.913170 kernel: cpuidle: using governor menu
Jan 29 16:04:07.913177 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:04:07.913184 kernel: ASID allocator initialised with 32768 entries
Jan 29 16:04:07.913191 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:04:07.913198 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:04:07.913205 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 16:04:07.913212 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 16:04:07.913220 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:04:07.913227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:04:07.913234 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:04:07.913241 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:04:07.913248 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:04:07.913255 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:04:07.913262 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:04:07.913269 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:04:07.913294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:04:07.913304 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:04:07.913311 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:04:07.913318 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:04:07.913325 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:04:07.913332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:04:07.913339 kernel: ACPI: Interpreter enabled
Jan 29 16:04:07.913346 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:04:07.913353 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 16:04:07.913360 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 16:04:07.913368 kernel: printk: console [ttyAMA0] enabled
Jan 29 16:04:07.913380 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:04:07.913519 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:04:07.913594 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 16:04:07.913660 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 16:04:07.913720 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 16:04:07.913780 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 16:04:07.913791 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 16:04:07.913799 kernel: PCI host bridge to bus 0000:00
Jan 29 16:04:07.913865 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 16:04:07.913923 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 16:04:07.913979 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 16:04:07.914034 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:04:07.914111 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 16:04:07.914191 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:04:07.914255 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 16:04:07.914353 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 16:04:07.914428 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:04:07.914500 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:04:07.914567 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 16:04:07.914630 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 16:04:07.914692 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 16:04:07.914764 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 16:04:07.914820 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 16:04:07.914829 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 16:04:07.914837 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 16:04:07.914844 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 16:04:07.914851 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 16:04:07.914860 kernel: iommu: Default domain type: Translated
Jan 29 16:04:07.914868 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:04:07.914875 kernel: efivars: Registered efivars operations
Jan 29 16:04:07.914882 kernel: vgaarb: loaded
Jan 29 16:04:07.914888 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:04:07.914896 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:04:07.914903 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:04:07.914910 kernel: pnp: PnP ACPI init
Jan 29 16:04:07.914978 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 16:04:07.914989 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 16:04:07.914997 kernel: NET: Registered PF_INET protocol family
Jan 29 16:04:07.915004 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:04:07.915011 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:04:07.915018 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:04:07.915025 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:04:07.915032 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:04:07.915040 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:04:07.915047 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:04:07.915055 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:04:07.915062 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:04:07.915069 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:04:07.915076 kernel: kvm [1]: HYP mode not available
Jan 29 16:04:07.915083 kernel: Initialise system trusted keyrings
Jan 29 16:04:07.915090 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:04:07.915098 kernel: Key type asymmetric registered
Jan 29 16:04:07.915105 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:04:07.915112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:04:07.915121 kernel: io scheduler mq-deadline registered
Jan 29 16:04:07.915128 kernel: io scheduler kyber registered
Jan 29 16:04:07.915135 kernel: io scheduler bfq registered
Jan 29 16:04:07.915142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 16:04:07.915150 kernel: ACPI: button: Power Button [PWRB]
Jan 29 16:04:07.915157 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 16:04:07.915218 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 16:04:07.915228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:04:07.915235 kernel: thunder_xcv, ver 1.0
Jan 29 16:04:07.915244 kernel: thunder_bgx, ver 1.0
Jan 29 16:04:07.915251 kernel: nicpf, ver 1.0
Jan 29 16:04:07.915258 kernel: nicvf, ver 1.0
Jan 29 16:04:07.915345 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 16:04:07.915414 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:04:07 UTC (1738166647)
Jan 29 16:04:07.915424 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:04:07.915437 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 16:04:07.915445 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 16:04:07.915455 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 16:04:07.915462 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:04:07.915468 kernel: Segment Routing with IPv6
Jan 29 16:04:07.915475 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:04:07.915487 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:04:07.915494 kernel: Key type dns_resolver registered
Jan 29 16:04:07.915502 kernel: registered taskstats version 1
Jan 29 16:04:07.915509 kernel: Loading compiled-in X.509 certificates
Jan 29 16:04:07.915516 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b'
Jan 29 16:04:07.915524 kernel: Key type .fscrypt registered
Jan 29 16:04:07.915531 kernel: Key type fscrypt-provisioning registered
Jan 29 16:04:07.915538 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:04:07.915545 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:04:07.915552 kernel: ima: No architecture policies found
Jan 29 16:04:07.915559 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 16:04:07.915566 kernel: clk: Disabling unused clocks
Jan 29 16:04:07.915573 kernel: Freeing unused kernel memory: 38336K
Jan 29 16:04:07.915580 kernel: Run /init as init process
Jan 29 16:04:07.915588 kernel: with arguments:
Jan 29 16:04:07.915595 kernel: /init
Jan 29 16:04:07.915602 kernel: with environment:
Jan 29 16:04:07.915609 kernel: HOME=/
Jan 29 16:04:07.915616 kernel: TERM=linux
Jan 29 16:04:07.915623 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:04:07.915630 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:04:07.915640 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:04:07.915649 systemd[1]: Detected virtualization kvm.
Jan 29 16:04:07.915657 systemd[1]: Detected architecture arm64.
Jan 29 16:04:07.915664 systemd[1]: Running in initrd.
Jan 29 16:04:07.915672 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:04:07.915680 systemd[1]: Hostname set to <localhost>.
Jan 29 16:04:07.915687 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:04:07.915695 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:04:07.915703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:04:07.915712 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:04:07.915719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:04:07.915727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:04:07.915735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:04:07.915743 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:04:07.915752 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:04:07.915761 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:04:07.915769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:04:07.915777 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:04:07.915784 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:04:07.915792 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:04:07.915799 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:04:07.915807 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:04:07.915814 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:04:07.915822 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:04:07.915830 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:04:07.915838 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:04:07.915845 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:04:07.915853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:04:07.915863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:04:07.915871 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:04:07.915878 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:04:07.915886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:04:07.915895 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:04:07.915902 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:04:07.915910 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:04:07.915917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:04:07.915925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:07.915932 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:04:07.915940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:04:07.915949 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:04:07.915957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:07.915980 systemd-journald[238]: Collecting audit messages is disabled.
Jan 29 16:04:07.916000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:04:07.916008 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:04:07.916016 systemd-journald[238]: Journal started
Jan 29 16:04:07.916034 systemd-journald[238]: Runtime Journal (/run/log/journal/aae1846dee8344d3ac62b0b47bcf483d) is 5.9M, max 47.3M, 41.4M free.
Jan 29 16:04:07.906456 systemd-modules-load[239]: Inserted module 'overlay'
Jan 29 16:04:07.918339 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:04:07.918725 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:04:07.922229 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:04:07.922792 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 29 16:04:07.923500 kernel: Bridge firewalling registered
Jan 29 16:04:07.930451 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:04:07.931962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:04:07.933356 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:04:07.934829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:07.936655 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:04:07.940852 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:04:07.942922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:04:07.943885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:04:07.950422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:04:07.952504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:04:07.954811 dracut-cmdline[271]: dracut-dracut-053
Jan 29 16:04:07.957158 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:04:07.987043 systemd-resolved[285]: Positive Trust Anchors:
Jan 29 16:04:07.987059 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:04:07.987090 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:04:07.991590 systemd-resolved[285]: Defaulting to hostname 'linux'.
Jan 29 16:04:07.993762 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:04:07.994718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:04:08.019297 kernel: SCSI subsystem initialized
Jan 29 16:04:08.023292 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:04:08.031292 kernel: iscsi: registered transport (tcp)
Jan 29 16:04:08.044572 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:04:08.044587 kernel: QLogic iSCSI HBA Driver
Jan 29 16:04:08.086972 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:04:08.106425 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:04:08.123903 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:04:08.123948 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:04:08.125020 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:04:08.171299 kernel: raid6: neonx8 gen() 15793 MB/s
Jan 29 16:04:08.188297 kernel: raid6: neonx4 gen() 15801 MB/s
Jan 29 16:04:08.205302 kernel: raid6: neonx2 gen() 13196 MB/s
Jan 29 16:04:08.222326 kernel: raid6: neonx1 gen() 10510 MB/s
Jan 29 16:04:08.239301 kernel: raid6: int64x8 gen() 6777 MB/s
Jan 29 16:04:08.256298 kernel: raid6: int64x4 gen() 7340 MB/s
Jan 29 16:04:08.273299 kernel: raid6: int64x2 gen() 6102 MB/s
Jan 29 16:04:08.290414 kernel: raid6: int64x1 gen() 5046 MB/s
Jan 29 16:04:08.290451 kernel: raid6: using algorithm neonx4 gen() 15801 MB/s
Jan 29 16:04:08.308445 kernel: raid6: .... xor() 12401 MB/s, rmw enabled
Jan 29 16:04:08.308499 kernel: raid6: using neon recovery algorithm
Jan 29 16:04:08.313681 kernel: xor: measuring software checksum speed
Jan 29 16:04:08.313710 kernel: 8regs : 21596 MB/sec
Jan 29 16:04:08.314350 kernel: 32regs : 21618 MB/sec
Jan 29 16:04:08.315592 kernel: arm64_neon : 26230 MB/sec
Jan 29 16:04:08.315613 kernel: xor: using function: arm64_neon (26230 MB/sec)
Jan 29 16:04:08.366311 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:04:08.377153 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:04:08.388476 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:04:08.402840 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 29 16:04:08.406541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:04:08.408842 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:04:08.423000 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jan 29 16:04:08.448105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:04:08.457434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:04:08.494896 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:04:08.506434 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:04:08.518065 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:04:08.519407 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:04:08.522138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:04:08.523226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:04:08.530425 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:04:08.536805 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 16:04:08.552626 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 16:04:08.552901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:04:08.552913 kernel: GPT:9289727 != 19775487
Jan 29 16:04:08.552923 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:04:08.552938 kernel: GPT:9289727 != 19775487
Jan 29 16:04:08.552946 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:04:08.552955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:04:08.543301 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:04:08.569072 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (512)
Jan 29 16:04:08.569131 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (524)
Jan 29 16:04:08.576251 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:04:08.592683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:04:08.598743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:04:08.599717 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:04:08.607653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:04:08.624439 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:04:08.625231 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:04:08.625318 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:08.627828 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:04:08.629555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:04:08.629615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:08.633141 disk-uuid[547]: Primary Header is updated.
Jan 29 16:04:08.633141 disk-uuid[547]: Secondary Entries is updated.
Jan 29 16:04:08.633141 disk-uuid[547]: Secondary Header is updated.
Jan 29 16:04:08.632407 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:08.637945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:04:08.634906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:08.650883 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:08.660524 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:04:08.684408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:09.646310 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:04:09.646884 disk-uuid[548]: The operation has completed successfully.
Jan 29 16:04:09.667976 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:04:09.668068 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:04:09.706469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:04:09.709013 sh[574]: Success
Jan 29 16:04:09.721623 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 16:04:09.747768 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:04:09.757519 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:04:09.760311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:04:09.768108 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a
Jan 29 16:04:09.768144 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:09.768155 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:04:09.769181 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:04:09.770606 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:04:09.773783 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:04:09.774811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:04:09.782419 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:04:09.783789 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:04:09.792486 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:09.792529 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:09.792539 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:04:09.795310 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:04:09.804293 kernel: BTRFS info (device vda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:09.808037 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:04:09.813446 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:04:09.876530 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:04:09.888572 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:04:09.894635 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:04:09.903567 ignition[672]: Ignition 2.20.0
Jan 29 16:04:09.903577 ignition[672]: Stage: fetch-offline
Jan 29 16:04:09.903608 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:09.903617 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:04:09.903763 ignition[672]: parsed url from cmdline: ""
Jan 29 16:04:09.903767 ignition[672]: no config URL provided
Jan 29 16:04:09.903771 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:04:09.903777 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:04:09.903799 ignition[672]: op(1): [started] loading QEMU firmware config module
Jan 29 16:04:09.903803 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 16:04:09.912844 ignition[672]: op(1): [finished] loading QEMU firmware config module
Jan 29 16:04:09.915198 systemd-networkd[767]: lo: Link UP
Jan 29 16:04:09.915211 systemd-networkd[767]: lo: Gained carrier
Jan 29 16:04:09.915996 systemd-networkd[767]: Enumeration completed
Jan 29 16:04:09.916115 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:04:09.916381 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:04:09.916385 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:04:09.917547 systemd-networkd[767]: eth0: Link UP
Jan 29 16:04:09.917550 systemd-networkd[767]: eth0: Gained carrier
Jan 29 16:04:09.917556 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:04:09.917559 systemd[1]: Reached target network.target - Network.
Jan 29 16:04:09.947316 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:04:09.957937 ignition[672]: parsing config with SHA512: 866f84df561389a4db668cbbacb7f0d7eb9e7b26e03eaabbff7a0da642a18c69eb5c28f79956d64d3fdbb3deb07d9f64b0b168a7dcf46a2dd2511a62cdb1b2ca
Jan 29 16:04:09.962504 unknown[672]: fetched base config from "system"
Jan 29 16:04:09.962516 unknown[672]: fetched user config from "qemu"
Jan 29 16:04:09.962942 ignition[672]: fetch-offline: fetch-offline passed
Jan 29 16:04:09.963007 ignition[672]: Ignition finished successfully
Jan 29 16:04:09.964545 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:04:09.965823 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:04:09.981454 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:04:09.993488 ignition[774]: Ignition 2.20.0
Jan 29 16:04:09.993499 ignition[774]: Stage: kargs
Jan 29 16:04:09.993640 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:09.993650 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:04:09.994557 ignition[774]: kargs: kargs passed
Jan 29 16:04:09.994598 ignition[774]: Ignition finished successfully
Jan 29 16:04:09.997782 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:04:10.008425 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:04:10.017295 ignition[784]: Ignition 2.20.0
Jan 29 16:04:10.017304 ignition[784]: Stage: disks
Jan 29 16:04:10.017460 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:10.017469 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:04:10.018249 ignition[784]: disks: disks passed
Jan 29 16:04:10.018305 ignition[784]: Ignition finished successfully
Jan 29 16:04:10.020334 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:04:10.021768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:04:10.024367 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:04:10.025197 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:04:10.026612 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:04:10.027981 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:04:10.036617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:04:10.045230 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:04:10.048815 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:04:10.050852 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:04:10.098175 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:04:10.099354 kernel: EXT4-fs (vda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none.
Jan 29 16:04:10.099217 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:04:10.112373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:04:10.113832 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:04:10.115011 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:04:10.115052 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:04:10.120653 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Jan 29 16:04:10.115072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:04:10.122636 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:04:10.125710 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:10.125738 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:10.125749 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:04:10.124537 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:04:10.128033 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:04:10.128966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:04:10.171493 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:04:10.175323 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:04:10.178325 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:04:10.181817 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:04:10.252741 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:04:10.273375 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:04:10.274751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:04:10.279292 kernel: BTRFS info (device vda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:10.294337 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:04:10.295697 ignition[917]: INFO : Ignition 2.20.0
Jan 29 16:04:10.295697 ignition[917]: INFO : Stage: mount
Jan 29 16:04:10.295697 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:10.295697 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:04:10.299064 ignition[917]: INFO : mount: mount passed
Jan 29 16:04:10.299064 ignition[917]: INFO : Ignition finished successfully
Jan 29 16:04:10.297196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:04:10.315441 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:04:10.895514 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:04:10.914478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:04:10.921161 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930)
Jan 29 16:04:10.921189 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:10.921207 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:10.922142 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:04:10.925300 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:04:10.925866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:04:10.940427 ignition[948]: INFO : Ignition 2.20.0
Jan 29 16:04:10.940427 ignition[948]: INFO : Stage: files
Jan 29 16:04:10.941619 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:10.941619 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:04:10.941619 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:04:10.944135 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:04:10.944135 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:04:10.944135 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:04:10.944135 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:04:10.948050 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:04:10.948050 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 16:04:10.948050 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 16:04:10.944384 unknown[948]: wrote ssh authorized keys file for user: core
Jan 29 16:04:10.988517 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 29 16:04:11.041825 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:04:11.514608 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 16:04:11.516075 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:04:11.516075 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 16:04:11.768087 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 16:04:11.833638 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:04:11.835041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:04:11.844851 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:04:11.844851 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:04:11.844851 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:11.844851 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:11.844851 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:11.844851 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 16:04:12.085608 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 16:04:12.309715 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:12.309715 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 16:04:12.312552 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:04:12.324658 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:04:12.327674 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:04:12.328766 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:04:12.328766 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:04:12.328766 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:04:12.328766 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:04:12.328766 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:04:12.328766 ignition[948]: INFO : files: files passed
Jan 29 16:04:12.328766 ignition[948]: INFO : Ignition finished successfully
Jan 29 16:04:12.330327 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:04:12.342428 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:04:12.344513 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:04:12.345617 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:04:12.345689 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:04:12.350885 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 16:04:12.353857 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:04:12.353857 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:04:12.356119 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:04:12.359307 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:04:12.360380 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:04:12.368390 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:04:12.385369 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:04:12.386104 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:04:12.388040 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:04:12.388900 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:04:12.389678 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:04:12.392074 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:04:12.404037 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:04:12.411499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:04:12.418424 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:04:12.419333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:04:12.421006 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:04:12.422417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:04:12.422523 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:04:12.424372 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:04:12.425861 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:04:12.427116 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:04:12.430155 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:04:12.431993 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:04:12.433399 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:04:12.434875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:04:12.436372 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:04:12.437892 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:04:12.439427 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:04:12.440595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:04:12.440706 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:04:12.442553 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:04:12.444131 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:04:12.445530 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:04:12.446326 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:04:12.447384 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:04:12.447491 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:04:12.450091 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:04:12.450209 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:04:12.451904 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:04:12.453907 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:04:12.459314 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:04:12.460254 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 29 16:04:12.462113 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:04:12.463480 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:04:12.463559 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:04:12.465023 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:04:12.465100 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:04:12.466549 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:04:12.466650 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:04:12.468129 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:04:12.468226 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:04:12.476451 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:04:12.477109 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:04:12.477225 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:04:12.479596 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:04:12.480231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:04:12.480510 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:04:12.482305 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:04:12.482418 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:04:12.487703 ignition[1002]: INFO : Ignition 2.20.0 Jan 29 16:04:12.487703 ignition[1002]: INFO : Stage: umount Jan 29 16:04:12.487703 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:04:12.487703 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:04:12.493548 ignition[1002]: INFO : umount: umount passed Jan 29 16:04:12.493548 ignition[1002]: INFO : Ignition finished successfully Jan 29 16:04:12.489127 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:04:12.489217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:04:12.491348 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:04:12.491771 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:04:12.491850 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:04:12.493157 systemd[1]: Stopped target network.target - Network. Jan 29 16:04:12.494152 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:04:12.494223 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:04:12.495629 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:04:12.495673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:04:12.497073 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:04:12.497115 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:04:12.498437 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:04:12.498480 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:04:12.500262 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:04:12.501649 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:04:12.512929 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 29 16:04:12.514119 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:04:12.517109 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:04:12.517298 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:04:12.517420 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:04:12.519799 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:04:12.520674 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:04:12.520711 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:04:12.534438 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:04:12.535106 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:04:12.535155 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:04:12.536680 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:04:12.536719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:04:12.538924 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:04:12.538961 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:04:12.540325 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:04:12.540375 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:04:12.542627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:04:12.544057 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:04:12.544106 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:04:12.551907 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:04:12.552844 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:04:12.559646 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:04:12.559771 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:04:12.561796 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:04:12.561854 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:04:12.563036 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:04:12.563066 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:04:12.563946 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:04:12.563989 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:04:12.566022 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:04:12.566064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:04:12.568067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:04:12.568110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:04:12.580485 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:04:12.581253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:04:12.581319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 16:04:12.583653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:04:12.583692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:04:12.586524 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:04:12.586571 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:04:12.586832 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:04:12.586937 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:04:12.588545 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:04:12.588635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:04:12.591649 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:04:12.592710 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:04:12.592767 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:04:12.594769 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:04:12.602927 systemd[1]: Switching root. Jan 29 16:04:12.629230 systemd-journald[238]: Journal stopped Jan 29 16:04:13.372996 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 29 16:04:13.373048 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:04:13.373060 kernel: SELinux: policy capability open_perms=1 Jan 29 16:04:13.373069 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:04:13.373078 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:04:13.373087 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:04:13.373097 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:04:13.373106 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:04:13.373117 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:04:13.373126 kernel: audit: type=1403 audit(1738166652.785:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:04:13.373139 systemd[1]: Successfully loaded SELinux policy in 34.922ms. Jan 29 16:04:13.373158 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.098ms. Jan 29 16:04:13.373169 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:04:13.373180 systemd[1]: Detected virtualization kvm. Jan 29 16:04:13.373191 systemd[1]: Detected architecture arm64. Jan 29 16:04:13.373201 systemd[1]: Detected first boot. Jan 29 16:04:13.373211 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:04:13.373222 zram_generator::config[1050]: No configuration found. Jan 29 16:04:13.373234 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:04:13.373243 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:04:13.373254 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:04:13.373266 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:04:13.373292 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:04:13.373305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 29 16:04:13.373315 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:04:13.373325 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:04:13.373338 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:04:13.373356 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:04:13.373369 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:04:13.373379 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:04:13.373389 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:04:13.373398 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:04:13.373408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:04:13.373419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:04:13.373429 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:04:13.373439 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:04:13.373452 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:04:13.373463 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:04:13.373474 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 16:04:13.373483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:04:13.373494 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:04:13.373504 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:04:13.373514 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:04:13.373526 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:04:13.373540 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:04:13.373550 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:04:13.373560 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:04:13.373572 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:04:13.373582 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:04:13.373592 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:04:13.373602 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:04:13.373611 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:04:13.373623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:04:13.373633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:04:13.373643 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:04:13.373653 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:04:13.373663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:04:13.373673 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 29 16:04:13.373683 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:04:13.373693 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:04:13.373703 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:04:13.373715 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:04:13.373726 systemd[1]: Reached target machines.target - Containers. Jan 29 16:04:13.373736 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:04:13.373746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:04:13.373756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:04:13.373767 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:04:13.373776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:04:13.373788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:04:13.373801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:04:13.373813 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:04:13.373823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:04:13.373834 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:04:13.373844 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:04:13.373854 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:04:13.373865 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:04:13.373874 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:04:13.373885 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:04:13.373897 kernel: fuse: init (API version 7.39) Jan 29 16:04:13.373906 kernel: ACPI: bus type drm_connector registered Jan 29 16:04:13.373916 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:04:13.373926 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:04:13.373936 kernel: loop: module loaded Jan 29 16:04:13.373946 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:04:13.373956 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:04:13.373965 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:04:13.373975 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:04:13.373987 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:04:13.373997 systemd[1]: Stopped verity-setup.service. Jan 29 16:04:13.374008 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:04:13.374018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 29 16:04:13.374028 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:04:13.374040 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:04:13.374050 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:04:13.374060 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:04:13.374089 systemd-journald[1125]: Collecting audit messages is disabled. Jan 29 16:04:13.374112 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:04:13.374122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:04:13.374133 systemd-journald[1125]: Journal started Jan 29 16:04:13.374155 systemd-journald[1125]: Runtime Journal (/run/log/journal/aae1846dee8344d3ac62b0b47bcf483d) is 5.9M, max 47.3M, 41.4M free. Jan 29 16:04:13.166918 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:04:13.178067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:04:13.178464 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:04:13.377896 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:04:13.378614 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:04:13.378780 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:04:13.379979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:04:13.380133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:04:13.381355 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:04:13.381614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:04:13.382681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:04:13.382824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:04:13.383961 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:04:13.384200 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:04:13.385315 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:04:13.385485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:04:13.386555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:04:13.387843 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:04:13.389148 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:04:13.390591 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:04:13.402171 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:04:13.412395 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:04:13.414121 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:04:13.414995 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:04:13.415031 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:04:13.416651 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:04:13.420448 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 29 16:04:13.422153 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:04:13.423057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:04:13.423989 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:04:13.425667 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:04:13.426690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:04:13.430265 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:04:13.431636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:04:13.432045 systemd-journald[1125]: Time spent on flushing to /var/log/journal/aae1846dee8344d3ac62b0b47bcf483d is 19.390ms for 870 entries. Jan 29 16:04:13.432045 systemd-journald[1125]: System Journal (/var/log/journal/aae1846dee8344d3ac62b0b47bcf483d) is 8M, max 195.6M, 187.6M free. Jan 29 16:04:13.462632 systemd-journald[1125]: Received client request to flush runtime journal. Jan 29 16:04:13.462675 kernel: loop0: detected capacity change from 0 to 201592 Jan 29 16:04:13.432494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:04:13.437481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:04:13.440303 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:04:13.442483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:04:13.446620 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:04:13.447679 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:04:13.449131 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:04:13.450548 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:04:13.453021 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:04:13.463469 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:04:13.465860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:04:13.472309 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:04:13.474548 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:04:13.480457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:04:13.483742 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:04:13.489443 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:04:13.490406 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:04:13.504461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:04:13.511394 kernel: loop1: detected capacity change from 0 to 123192 Jan 29 16:04:13.521527 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. 
Jan 29 16:04:13.521543 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 29 16:04:13.525685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:04:13.548490 kernel: loop2: detected capacity change from 0 to 113512 Jan 29 16:04:13.582309 kernel: loop3: detected capacity change from 0 to 201592 Jan 29 16:04:13.590296 kernel: loop4: detected capacity change from 0 to 123192 Jan 29 16:04:13.597308 kernel: loop5: detected capacity change from 0 to 113512 Jan 29 16:04:13.601455 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 16:04:13.601868 (sd-merge)[1193]: Merged extensions into '/usr'. Jan 29 16:04:13.606035 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:04:13.606055 systemd[1]: Reloading... Jan 29 16:04:13.668372 zram_generator::config[1219]: No configuration found. Jan 29 16:04:13.694356 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:04:13.752602 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:04:13.801988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:04:13.802232 systemd[1]: Reloading finished in 195 ms. Jan 29 16:04:13.824086 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:04:13.825227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:04:13.840831 systemd[1]: Starting ensure-sysext.service... Jan 29 16:04:13.842471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:04:13.858210 systemd[1]: Reload requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:04:13.858229 systemd[1]: Reloading... Jan 29 16:04:13.870510 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:04:13.870995 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:04:13.871721 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:04:13.871921 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 29 16:04:13.871963 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 29 16:04:13.874320 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:04:13.874331 systemd-tmpfiles[1256]: Skipping /boot Jan 29 16:04:13.882583 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:04:13.882596 systemd-tmpfiles[1256]: Skipping /boot Jan 29 16:04:13.907332 zram_generator::config[1285]: No configuration found. Jan 29 16:04:13.985415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:04:14.034503 systemd[1]: Reloading finished in 175 ms. Jan 29 16:04:14.048320 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 29 16:04:14.049522 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:04:14.066436 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:04:14.068356 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:04:14.070188 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:04:14.075505 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:04:14.080552 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:04:14.083791 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:04:14.092635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:04:14.097042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:04:14.098533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:04:14.102626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:04:14.105422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:04:14.106572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:04:14.106677 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:04:14.108542 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:04:14.111571 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:04:14.114387 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:04:14.115674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:04:14.115808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:04:14.118839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:04:14.118976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:04:14.120374 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:04:14.120520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:04:14.122652 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 29 16:04:14.128457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:04:14.139949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:04:14.144603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:04:14.149243 augenrules[1359]: No rules Jan 29 16:04:14.151107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:04:14.152003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:04:14.152154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:04:14.152305 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:04:14.154475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:04:14.157241 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:04:14.157460 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:04:14.158728 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:04:14.160032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:04:14.160167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:04:14.161603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:04:14.161739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:04:14.163294 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:04:14.164566 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:04:14.164705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:04:14.172923 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:04:14.188521 systemd[1]: Finished ensure-sysext.service. Jan 29 16:04:14.194761 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 16:04:14.200551 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:04:14.201438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:04:14.205453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:04:14.211444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:04:14.213823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:04:14.218486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:04:14.219321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:04:14.219372 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:04:14.222593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:04:14.226552 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:04:14.227339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:04:14.227885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:04:14.228017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 16:04:14.229731 augenrules[1395]: /sbin/augenrules: No change Jan 29 16:04:14.230672 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:04:14.230840 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:04:14.231873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:04:14.232017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:04:14.233381 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:04:14.233519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:04:14.237317 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Jan 29 16:04:14.240983 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:04:14.241051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:04:14.254791 augenrules[1424]: No rules Jan 29 16:04:14.258698 systemd-resolved[1324]: Positive Trust Anchors: Jan 29 16:04:14.258716 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:04:14.258747 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:04:14.264628 systemd-resolved[1324]: Defaulting to hostname 'linux'. Jan 29 16:04:14.267304 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:04:14.270594 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:04:14.271406 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:04:14.278724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:04:14.288008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:04:14.301447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:04:14.310516 systemd-networkd[1407]: lo: Link UP Jan 29 16:04:14.310523 systemd-networkd[1407]: lo: Gained carrier Jan 29 16:04:14.311620 systemd-networkd[1407]: Enumeration completed Jan 29 16:04:14.311997 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:04:14.312006 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:04:14.312022 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:04:14.312618 systemd-networkd[1407]: eth0: Link UP Jan 29 16:04:14.312628 systemd-networkd[1407]: eth0: Gained carrier Jan 29 16:04:14.312640 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:04:14.313575 systemd[1]: Reached target network.target - Network. 
Jan 29 16:04:14.323431 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:04:14.325375 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:04:14.326385 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:04:14.327410 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:04:14.327979 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 16:04:14.328291 systemd-timesyncd[1411]: Initial clock synchronization to Wed 2025-01-29 16:04:14.371078 UTC. Jan 29 16:04:14.329518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:04:14.331442 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:04:14.353599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:04:14.354896 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:04:14.365087 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:04:14.370122 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:04:14.387041 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:04:14.390744 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:04:14.422390 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:04:14.423569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:04:14.424408 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:04:14.425300 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:04:14.426184 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:04:14.427269 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:04:14.428158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:04:14.429093 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:04:14.429985 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:04:14.430014 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:04:14.430719 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:04:14.432306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:04:14.434250 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:04:14.437098 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:04:14.438249 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:04:14.439179 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:04:14.443082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:04:14.444549 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 29 16:04:14.446449 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:04:14.447779 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:04:14.448669 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:04:14.449411 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:04:14.450088 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:04:14.450119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:04:14.451002 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:04:14.452751 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:04:14.455432 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:04:14.455414 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:04:14.457467 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:04:14.458245 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:04:14.460201 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:04:14.464166 jq[1459]: false Jan 29 16:04:14.465441 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:04:14.467259 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:04:14.471476 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:04:14.475552 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:04:14.477096 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:04:14.477565 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:04:14.480480 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:04:14.483075 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:04:14.484137 extend-filesystems[1460]: Found loop3 Jan 29 16:04:14.484137 extend-filesystems[1460]: Found loop4 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found loop5 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda1 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda2 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda3 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found usr Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda4 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda6 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda7 Jan 29 16:04:14.489148 extend-filesystems[1460]: Found vda9 Jan 29 16:04:14.489148 extend-filesystems[1460]: Checking size of /dev/vda9 Jan 29 16:04:14.487114 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:04:14.496286 dbus-daemon[1458]: [system] SELinux support is enabled Jan 29 16:04:14.492787 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 29 16:04:14.504634 extend-filesystems[1460]: Resized partition /dev/vda9 Jan 29 16:04:14.506811 jq[1473]: true Jan 29 16:04:14.492998 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:04:14.494652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:04:14.496314 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:04:14.498606 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:04:14.505736 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:04:14.505911 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:04:14.514745 tar[1480]: linux-arm64/LICENSE Jan 29 16:04:14.514745 tar[1480]: linux-arm64/helm Jan 29 16:04:14.515329 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:04:14.515383 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:04:14.516221 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:04:14.522378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371) Jan 29 16:04:14.516350 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:04:14.516369 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:04:14.525305 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 16:04:14.541306 jq[1483]: true Jan 29 16:04:14.551291 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 16:04:14.555694 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:04:14.568909 update_engine[1469]: I20250129 16:04:14.561627 1469 main.cc:92] Flatcar Update Engine starting Jan 29 16:04:14.569102 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:04:14.569102 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:04:14.569102 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 16:04:14.573359 extend-filesystems[1460]: Resized filesystem in /dev/vda9 Jan 29 16:04:14.570976 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:04:14.574046 update_engine[1469]: I20250129 16:04:14.572999 1469 update_check_scheduler.cc:74] Next update check in 11m22s Jan 29 16:04:14.571173 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:04:14.574710 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:04:14.588552 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:04:14.590872 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 16:04:14.594543 systemd-logind[1467]: New seat seat0. Jan 29 16:04:14.595593 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:04:14.597662 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:04:14.598848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 29 16:04:14.601081 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 16:04:14.648730 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:04:14.758394 containerd[1493]: time="2025-01-29T16:04:14.756907280Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:04:14.783608 containerd[1493]: time="2025-01-29T16:04:14.783551040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785061 containerd[1493]: time="2025-01-29T16:04:14.785026960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785061 containerd[1493]: time="2025-01-29T16:04:14.785059240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:04:14.785132 containerd[1493]: time="2025-01-29T16:04:14.785077080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:04:14.785242 containerd[1493]: time="2025-01-29T16:04:14.785222880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:04:14.785270 containerd[1493]: time="2025-01-29T16:04:14.785250320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785347 containerd[1493]: time="2025-01-29T16:04:14.785323200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785383 containerd[1493]: time="2025-01-29T16:04:14.785348040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785556 containerd[1493]: time="2025-01-29T16:04:14.785536320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785585 containerd[1493]: time="2025-01-29T16:04:14.785556040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785585 containerd[1493]: time="2025-01-29T16:04:14.785570400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785585 containerd[1493]: time="2025-01-29T16:04:14.785580200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785698 containerd[1493]: time="2025-01-29T16:04:14.785650920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785883 containerd[1493]: time="2025-01-29T16:04:14.785843440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:04:14.785982 containerd[1493]: time="2025-01-29T16:04:14.785965640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:14.786016 containerd[1493]: time="2025-01-29T16:04:14.785982240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:04:14.786091 containerd[1493]: time="2025-01-29T16:04:14.786049120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:04:14.786122 containerd[1493]: time="2025-01-29T16:04:14.786094000Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:04:14.789814 containerd[1493]: time="2025-01-29T16:04:14.789786320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:04:14.789865 containerd[1493]: time="2025-01-29T16:04:14.789836520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:04:14.789865 containerd[1493]: time="2025-01-29T16:04:14.789853240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:04:14.789898 containerd[1493]: time="2025-01-29T16:04:14.789867680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:04:14.789898 containerd[1493]: time="2025-01-29T16:04:14.789881440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:04:14.790022 containerd[1493]: time="2025-01-29T16:04:14.790004160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:04:14.790253 containerd[1493]: time="2025-01-29T16:04:14.790212320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:04:14.790357 containerd[1493]: time="2025-01-29T16:04:14.790333160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:04:14.790382 containerd[1493]: time="2025-01-29T16:04:14.790361760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:04:14.790382 containerd[1493]: time="2025-01-29T16:04:14.790375960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:04:14.790415 containerd[1493]: time="2025-01-29T16:04:14.790388880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790415 containerd[1493]: time="2025-01-29T16:04:14.790405680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790458 containerd[1493]: time="2025-01-29T16:04:14.790417360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790458 containerd[1493]: time="2025-01-29T16:04:14.790431040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 16:04:14.790458 containerd[1493]: time="2025-01-29T16:04:14.790444760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790458 containerd[1493]: time="2025-01-29T16:04:14.790457000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790528 containerd[1493]: time="2025-01-29T16:04:14.790469600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790528 containerd[1493]: time="2025-01-29T16:04:14.790480760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:04:14.790528 containerd[1493]: time="2025-01-29T16:04:14.790500240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790528 containerd[1493]: time="2025-01-29T16:04:14.790513920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790528 containerd[1493]: time="2025-01-29T16:04:14.790525480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790537440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790550000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790562800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790573640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790585480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790599040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790617 containerd[1493]: time="2025-01-29T16:04:14.790612720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790625400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790638880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790658240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790673840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790693880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790706400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.790726 containerd[1493]: time="2025-01-29T16:04:14.790716800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:04:14.790908 containerd[1493]: time="2025-01-29T16:04:14.790881480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:04:14.790908 containerd[1493]: time="2025-01-29T16:04:14.790902440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:04:14.790954 containerd[1493]: time="2025-01-29T16:04:14.790912160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:04:14.792578 containerd[1493]: time="2025-01-29T16:04:14.790923680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:04:14.792578 containerd[1493]: time="2025-01-29T16:04:14.792464560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:04:14.792578 containerd[1493]: time="2025-01-29T16:04:14.792511160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:04:14.792578 containerd[1493]: time="2025-01-29T16:04:14.792523400Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:04:14.792578 containerd[1493]: time="2025-01-29T16:04:14.792537920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:04:14.793024 containerd[1493]: time="2025-01-29T16:04:14.792969760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:04:14.793956 containerd[1493]: time="2025-01-29T16:04:14.793374840Z" level=info msg="Connect containerd service" Jan 29 16:04:14.793956 containerd[1493]: time="2025-01-29T16:04:14.793423720Z" level=info msg="using legacy CRI server" Jan 29 16:04:14.793956 containerd[1493]: time="2025-01-29T16:04:14.793432600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:04:14.793956 containerd[1493]: time="2025-01-29T16:04:14.793667400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794253400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:04:14.795131 
containerd[1493]: time="2025-01-29T16:04:14.794571920Z" level=info msg="Start subscribing containerd event" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794626480Z" level=info msg="Start recovering state" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794685680Z" level=info msg="Start event monitor" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794695880Z" level=info msg="Start snapshots syncer" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794704320Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794711000Z" level=info msg="Start streaming server" Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794782240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794821000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:04:14.795131 containerd[1493]: time="2025-01-29T16:04:14.794867000Z" level=info msg="containerd successfully booted in 0.038917s" Jan 29 16:04:14.794948 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:04:14.876387 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:04:14.894759 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:04:14.909575 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:04:14.913902 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:04:14.914107 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:04:14.916810 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:04:14.928718 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:04:14.938648 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:04:14.943178 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 16:04:14.944628 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:04:14.949358 tar[1480]: linux-arm64/README.md Jan 29 16:04:14.956485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:04:16.235477 systemd-networkd[1407]: eth0: Gained IPv6LL Jan 29 16:04:16.237886 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:04:16.239350 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:04:16.247598 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:04:16.249685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:16.251429 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:04:16.264581 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:04:16.264796 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:04:16.266142 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:04:16.269396 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:04:16.786449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:16.787906 systemd[1]: Reached target multi-user.target - Multi-User System. 
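Editor's note: the containerd warning a few entries back ("failed to load cni during init ... no network config found in /etc/cni/net.d") is the expected state on a node where no CNI plugin has installed its config yet; pod networking only comes up once something writes a conf or conflist into that directory. Below is a minimal sketch of the kind of presence check behind that message, purely illustrative and not containerd's actual code; the directory path matches the NetworkPluginConfDir shown in the CRI config dump above.

# Illustrative sketch, not containerd source: report whether any CNI network
# config is present, mirroring the "no network config found" warning above.
import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"  # NetworkPluginConfDir from the CRI config dump

def find_cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
    # libcni looks for .conf, .conflist and .json files in this directory
    patterns = ("*.conf", "*.conflist", "*.json")
    found: list[str] = []
    for pattern in patterns:
        found.extend(sorted(glob.glob(os.path.join(conf_dir, pattern))))
    return found

if __name__ == "__main__":
    configs = find_cni_configs()
    if configs:
        print("CNI config(s):", ", ".join(configs))
    else:
        print(f"no network config found in {CNI_CONF_DIR}; pod networking not ready yet")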
Jan 29 16:04:16.788863 systemd[1]: Startup finished in 547ms (kernel) + 5.087s (initrd) + 4.038s (userspace) = 9.672s. Jan 29 16:04:16.789880 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:04:17.211562 kubelet[1570]: E0129 16:04:17.211435 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:04:17.213935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:04:17.214075 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:04:17.214539 systemd[1]: kubelet.service: Consumed 798ms CPU time, 250.5M memory peak. Jan 29 16:04:20.404146 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:04:20.405310 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:60122.service - OpenSSH per-connection server daemon (10.0.0.1:60122). Jan 29 16:04:20.470803 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 60122 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:20.472476 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:20.478001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:04:20.489504 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:04:20.495000 systemd-logind[1467]: New session 1 of user core. Jan 29 16:04:20.498432 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:04:20.500632 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:04:20.505939 (systemd)[1588]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:04:20.507967 systemd-logind[1467]: New session c1 of user core. Jan 29 16:04:20.610513 systemd[1588]: Queued start job for default target default.target. Jan 29 16:04:20.625135 systemd[1588]: Created slice app.slice - User Application Slice. Jan 29 16:04:20.625163 systemd[1588]: Reached target paths.target - Paths. Jan 29 16:04:20.625196 systemd[1588]: Reached target timers.target - Timers. Jan 29 16:04:20.626382 systemd[1588]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:04:20.634523 systemd[1588]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:04:20.634582 systemd[1588]: Reached target sockets.target - Sockets. Jan 29 16:04:20.634617 systemd[1588]: Reached target basic.target - Basic System. Jan 29 16:04:20.634645 systemd[1588]: Reached target default.target - Main User Target. Jan 29 16:04:20.634668 systemd[1588]: Startup finished in 121ms. Jan 29 16:04:20.634867 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:04:20.636271 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:04:20.717628 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:60132.service - OpenSSH per-connection server daemon (10.0.0.1:60132). 
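Editor's note: the kubelet exit near the top of this stretch (open /var/lib/kubelet/config.yaml: no such file or directory, status=1/FAILURE) is the normal pre-bootstrap state: that config file is only written when the node is initialised, for example by kubeadm init or kubeadm join, so the unit keeps failing and being restarted until then. A trivial illustrative check, not the kubelet's own code:

# Illustrative only: the missing-file condition behind the kubelet failure above.
# /var/lib/kubelet/config.yaml is created during node bootstrap (e.g. by kubeadm),
# so its absence on first boot is expected rather than a fault.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    print(f"open {KUBELET_CONFIG}: no such file or directory; "
          "kubelet will keep restarting until the node is bootstrapped")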
Jan 29 16:04:20.751199 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 60132 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:20.752370 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:20.756393 systemd-logind[1467]: New session 2 of user core. Jan 29 16:04:20.770425 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:04:20.822770 sshd[1601]: Connection closed by 10.0.0.1 port 60132 Jan 29 16:04:20.822691 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:20.834454 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:60132.service: Deactivated successfully. Jan 29 16:04:20.835992 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:04:20.837192 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:04:20.838341 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:60134.service - OpenSSH per-connection server daemon (10.0.0.1:60134). Jan 29 16:04:20.839164 systemd-logind[1467]: Removed session 2. Jan 29 16:04:20.875715 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 60134 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:20.876815 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:20.880993 systemd-logind[1467]: New session 3 of user core. Jan 29 16:04:20.897465 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:04:20.944147 sshd[1609]: Connection closed by 10.0.0.1 port 60134 Jan 29 16:04:20.944410 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:20.954353 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:60134.service: Deactivated successfully. Jan 29 16:04:20.955904 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:04:20.957096 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:04:20.958173 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:60144.service - OpenSSH per-connection server daemon (10.0.0.1:60144). Jan 29 16:04:20.959043 systemd-logind[1467]: Removed session 3. Jan 29 16:04:20.994866 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 60144 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:20.995863 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:20.999924 systemd-logind[1467]: New session 4 of user core. Jan 29 16:04:21.011470 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:04:21.063179 sshd[1617]: Connection closed by 10.0.0.1 port 60144 Jan 29 16:04:21.063463 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:21.079287 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:60144.service: Deactivated successfully. Jan 29 16:04:21.080733 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:04:21.081935 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:04:21.083013 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:60150.service - OpenSSH per-connection server daemon (10.0.0.1:60150). Jan 29 16:04:21.083995 systemd-logind[1467]: Removed session 4. 
Jan 29 16:04:21.119168 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 60150 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:21.120219 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:21.123481 systemd-logind[1467]: New session 5 of user core. Jan 29 16:04:21.136404 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:04:21.193193 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:04:21.193497 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:21.211094 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:21.212857 sshd[1625]: Connection closed by 10.0.0.1 port 60150 Jan 29 16:04:21.212678 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:21.227240 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:60150.service: Deactivated successfully. Jan 29 16:04:21.228738 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:04:21.229417 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:04:21.231126 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:60158.service - OpenSSH per-connection server daemon (10.0.0.1:60158). Jan 29 16:04:21.232633 systemd-logind[1467]: Removed session 5. Jan 29 16:04:21.268088 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 60158 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:21.269092 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:21.272642 systemd-logind[1467]: New session 6 of user core. Jan 29 16:04:21.281476 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:04:21.331838 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:04:21.332099 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:21.334840 sudo[1636]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:21.339203 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:04:21.339485 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:21.357978 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:04:21.379666 augenrules[1658]: No rules Jan 29 16:04:21.380755 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:04:21.380944 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:04:21.382149 sudo[1635]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:21.383303 sshd[1634]: Connection closed by 10.0.0.1 port 60158 Jan 29 16:04:21.383627 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:21.390087 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:60158.service: Deactivated successfully. Jan 29 16:04:21.391376 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:04:21.392558 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:04:21.393639 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:60168.service - OpenSSH per-connection server daemon (10.0.0.1:60168). Jan 29 16:04:21.396610 systemd-logind[1467]: Removed session 6. 
Jan 29 16:04:21.431845 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 60168 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:04:21.433061 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:21.437981 systemd-logind[1467]: New session 7 of user core. Jan 29 16:04:21.443427 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:04:21.494199 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:04:21.494784 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:21.835521 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:04:21.835680 (dockerd)[1690]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:04:22.085394 dockerd[1690]: time="2025-01-29T16:04:22.085333956Z" level=info msg="Starting up" Jan 29 16:04:22.289545 dockerd[1690]: time="2025-01-29T16:04:22.289219815Z" level=info msg="Loading containers: start." Jan 29 16:04:22.429319 kernel: Initializing XFRM netlink socket Jan 29 16:04:22.500373 systemd-networkd[1407]: docker0: Link UP Jan 29 16:04:22.536560 dockerd[1690]: time="2025-01-29T16:04:22.536512955Z" level=info msg="Loading containers: done." Jan 29 16:04:22.549582 dockerd[1690]: time="2025-01-29T16:04:22.549480061Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:04:22.549582 dockerd[1690]: time="2025-01-29T16:04:22.549572259Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:04:22.549764 dockerd[1690]: time="2025-01-29T16:04:22.549745235Z" level=info msg="Daemon has completed initialization" Jan 29 16:04:22.576323 dockerd[1690]: time="2025-01-29T16:04:22.576264970Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:04:22.576424 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:04:23.099555 containerd[1493]: time="2025-01-29T16:04:23.099518152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 16:04:23.870712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720325104.mount: Deactivated successfully. 
Jan 29 16:04:24.894033 containerd[1493]: time="2025-01-29T16:04:24.893976399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:24.894441 containerd[1493]: time="2025-01-29T16:04:24.894355731Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220950" Jan 29 16:04:24.895351 containerd[1493]: time="2025-01-29T16:04:24.895326954Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:24.898193 containerd[1493]: time="2025-01-29T16:04:24.898136547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:24.899505 containerd[1493]: time="2025-01-29T16:04:24.899468955Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.799908095s" Jan 29 16:04:24.899567 containerd[1493]: time="2025-01-29T16:04:24.899507693Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 16:04:24.900223 containerd[1493]: time="2025-01-29T16:04:24.900197813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 16:04:26.125826 containerd[1493]: time="2025-01-29T16:04:26.125767235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:26.126896 containerd[1493]: time="2025-01-29T16:04:26.126850349Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527109" Jan 29 16:04:26.127627 containerd[1493]: time="2025-01-29T16:04:26.127589969Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:26.131881 containerd[1493]: time="2025-01-29T16:04:26.131820853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:26.133655 containerd[1493]: time="2025-01-29T16:04:26.133621197Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.233393181s" Jan 29 16:04:26.133712 containerd[1493]: time="2025-01-29T16:04:26.133661972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 16:04:26.134116 
containerd[1493]: time="2025-01-29T16:04:26.134093343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 16:04:27.464545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:04:27.474492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:27.491246 containerd[1493]: time="2025-01-29T16:04:27.491188330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:27.513728 containerd[1493]: time="2025-01-29T16:04:27.513663881Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481115" Jan 29 16:04:27.569904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:27.573659 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:04:27.576846 containerd[1493]: time="2025-01-29T16:04:27.576769890Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:27.588488 containerd[1493]: time="2025-01-29T16:04:27.588392724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:27.590639 containerd[1493]: time="2025-01-29T16:04:27.590017662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.455891756s" Jan 29 16:04:27.590639 containerd[1493]: time="2025-01-29T16:04:27.590064240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 16:04:27.590991 containerd[1493]: time="2025-01-29T16:04:27.590959712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 16:04:27.612638 kubelet[1958]: E0129 16:04:27.612536 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:04:27.615772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:04:27.615916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:04:27.616411 systemd[1]: kubelet.service: Consumed 137ms CPU time, 103.2M memory peak. Jan 29 16:04:28.905613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735556655.mount: Deactivated successfully. 
Jan 29 16:04:29.283162 containerd[1493]: time="2025-01-29T16:04:29.283028206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:29.284101 containerd[1493]: time="2025-01-29T16:04:29.284045597Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 29 16:04:29.285137 containerd[1493]: time="2025-01-29T16:04:29.285093781Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:29.287934 containerd[1493]: time="2025-01-29T16:04:29.287898123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:29.288441 containerd[1493]: time="2025-01-29T16:04:29.288408680Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.697413125s" Jan 29 16:04:29.288441 containerd[1493]: time="2025-01-29T16:04:29.288436070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 16:04:29.288939 containerd[1493]: time="2025-01-29T16:04:29.288885200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 16:04:30.119561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591745603.mount: Deactivated successfully. 
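Editor's note: each image pull in this stretch pairs a compressed "bytes read" figure with a wall-clock duration in the matching "Pulled image ... in Ns" entry, so a rough registry throughput falls straight out of the log. A small sketch of that arithmetic, with the numbers copied from the surrounding entries; the rates are approximate.

# Rough pull throughput from the figures quoted in the log: compressed bytes
# read divided by the reported pull duration (values copied from the entries above).
pulls = {
    "kube-apiserver:v1.32.1":          (26_220_950, 1.799908095),
    "kube-controller-manager:v1.32.1": (22_527_109, 1.233393181),
    "kube-scheduler:v1.32.1":          (17_481_115, 1.455891756),
    "kube-proxy:v1.32.1":              (27_364_399, 1.697413125),
}

for image, (bytes_read, seconds) in pulls.items():
    print(f"{image:35s} {bytes_read / seconds / 1e6:5.1f} MB/s")
# apiserver ~14.6, controller-manager ~18.3, scheduler ~12.0, proxy ~16.1 MB/s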
Jan 29 16:04:30.908300 containerd[1493]: time="2025-01-29T16:04:30.908172058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:30.909154 containerd[1493]: time="2025-01-29T16:04:30.908895598Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jan 29 16:04:30.909859 containerd[1493]: time="2025-01-29T16:04:30.909805369Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:30.912933 containerd[1493]: time="2025-01-29T16:04:30.912868024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:30.914250 containerd[1493]: time="2025-01-29T16:04:30.914190978Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.625272262s" Jan 29 16:04:30.914250 containerd[1493]: time="2025-01-29T16:04:30.914224612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 16:04:30.914785 containerd[1493]: time="2025-01-29T16:04:30.914759239Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:04:31.453611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800095979.mount: Deactivated successfully. 
Jan 29 16:04:31.457262 containerd[1493]: time="2025-01-29T16:04:31.457206999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:31.458037 containerd[1493]: time="2025-01-29T16:04:31.457991352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 16:04:31.458730 containerd[1493]: time="2025-01-29T16:04:31.458696589Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:31.460967 containerd[1493]: time="2025-01-29T16:04:31.460914317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:31.461815 containerd[1493]: time="2025-01-29T16:04:31.461789477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 546.996203ms" Jan 29 16:04:31.461869 containerd[1493]: time="2025-01-29T16:04:31.461821267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 16:04:31.462388 containerd[1493]: time="2025-01-29T16:04:31.462240990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 16:04:32.082426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681562902.mount: Deactivated successfully. Jan 29 16:04:33.834081 containerd[1493]: time="2025-01-29T16:04:33.834026332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:33.835183 containerd[1493]: time="2025-01-29T16:04:33.834556259Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Jan 29 16:04:33.835967 containerd[1493]: time="2025-01-29T16:04:33.835895268Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:33.840717 containerd[1493]: time="2025-01-29T16:04:33.840685909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:33.842066 containerd[1493]: time="2025-01-29T16:04:33.842030683Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.37975306s" Jan 29 16:04:33.842066 containerd[1493]: time="2025-01-29T16:04:33.842062991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 16:04:37.867736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
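Editor's note: both kubelet restarts in this log are scheduled roughly ten seconds after the preceding failure (failures at 16:04:17 and 16:04:27, restart jobs at 16:04:27 and 16:04:37), consistent with a fixed restart delay in the unit; the actual RestartSec setting is not shown here, so treat the 10 s as observed rather than configured. The gap can be read off the journal timestamps directly:

# Observed restart cadence, computed from the journal timestamps above.
# The year and the timestamp format are assumptions based on the log's own style.
from datetime import datetime

FMT = "%Y %b %d %H:%M:%S.%f"
failed    = datetime.strptime("2025 Jan 29 16:04:27.616411", FMT)  # kubelet.service failed
scheduled = datetime.strptime("2025 Jan 29 16:04:37.867736", FMT)  # restart job scheduled

print(f"restart scheduled {(scheduled - failed).total_seconds():.2f}s after the failure")
# -> restart scheduled 10.25s after the failure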
Jan 29 16:04:37.881559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:37.892232 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 16:04:37.892328 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 16:04:37.892543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:37.895415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:37.915712 systemd[1]: Reload requested from client PID 2118 ('systemctl') (unit session-7.scope)... Jan 29 16:04:37.915727 systemd[1]: Reloading... Jan 29 16:04:37.997398 zram_generator::config[2166]: No configuration found. Jan 29 16:04:38.109955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:04:38.180207 systemd[1]: Reloading finished in 264 ms. Jan 29 16:04:38.227881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:38.230641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:38.231242 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:04:38.232356 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:38.232417 systemd[1]: kubelet.service: Consumed 84ms CPU time, 90.2M memory peak. Jan 29 16:04:38.233907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:38.328328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:38.331874 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:04:38.374192 kubelet[2209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:04:38.374192 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:04:38.374192 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:04:38.374546 kubelet[2209]: I0129 16:04:38.374254 2209 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:04:39.586715 kubelet[2209]: I0129 16:04:39.586667 2209 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:04:39.586715 kubelet[2209]: I0129 16:04:39.586701 2209 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:04:39.587107 kubelet[2209]: I0129 16:04:39.586963 2209 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:04:39.614612 kubelet[2209]: E0129 16:04:39.614556 2209 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:39.615761 kubelet[2209]: I0129 16:04:39.615703 2209 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:04:39.623896 kubelet[2209]: E0129 16:04:39.623788 2209 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:04:39.623896 kubelet[2209]: I0129 16:04:39.623825 2209 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:04:39.626580 kubelet[2209]: I0129 16:04:39.626555 2209 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:04:39.627182 kubelet[2209]: I0129 16:04:39.627135 2209 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:04:39.627384 kubelet[2209]: I0129 16:04:39.627176 2209 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:04:39.627479 kubelet[2209]: I0129 16:04:39.627443 2209 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:04:39.627479 kubelet[2209]: I0129 16:04:39.627453 2209 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:04:39.627670 kubelet[2209]: I0129 16:04:39.627643 2209 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:04:39.630044 kubelet[2209]: I0129 16:04:39.630009 2209 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:04:39.630044 kubelet[2209]: I0129 16:04:39.630038 2209 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:04:39.630109 kubelet[2209]: I0129 16:04:39.630059 2209 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:04:39.630109 kubelet[2209]: I0129 16:04:39.630070 2209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:04:39.636191 kubelet[2209]: I0129 16:04:39.635893 2209 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:04:39.636191 kubelet[2209]: W0129 16:04:39.636047 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:39.636191 kubelet[2209]: W0129 16:04:39.636064 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.43:6443: connect: connection refused Jan 29 16:04:39.636191 kubelet[2209]: E0129 16:04:39.636108 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:39.636191 kubelet[2209]: E0129 16:04:39.636121 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:39.636591 kubelet[2209]: I0129 16:04:39.636570 2209 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:04:39.636717 kubelet[2209]: W0129 16:04:39.636690 2209 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:04:39.637702 kubelet[2209]: I0129 16:04:39.637602 2209 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:04:39.637702 kubelet[2209]: I0129 16:04:39.637647 2209 server.go:1287] "Started kubelet" Jan 29 16:04:39.639478 kubelet[2209]: I0129 16:04:39.639240 2209 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:04:39.640172 kubelet[2209]: I0129 16:04:39.640133 2209 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:04:39.641571 kubelet[2209]: I0129 16:04:39.641494 2209 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:04:39.641821 kubelet[2209]: I0129 16:04:39.641787 2209 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:04:39.641972 kubelet[2209]: I0129 16:04:39.641936 2209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:04:39.642439 kubelet[2209]: I0129 16:04:39.642141 2209 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:04:39.642599 kubelet[2209]: E0129 16:04:39.642571 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:04:39.642641 kubelet[2209]: E0129 16:04:39.642365 2209 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f3562cc387873 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:04:39.637620851 +0000 UTC m=+1.302265579,LastTimestamp:2025-01-29 16:04:39.637620851 +0000 UTC m=+1.302265579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:04:39.642641 kubelet[2209]: I0129 16:04:39.642612 2209 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:04:39.642770 
kubelet[2209]: E0129 16:04:39.642744 2209 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:04:39.642804 kubelet[2209]: I0129 16:04:39.642787 2209 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:04:39.642859 kubelet[2209]: I0129 16:04:39.642845 2209 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:04:39.643126 kubelet[2209]: E0129 16:04:39.643081 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Jan 29 16:04:39.643167 kubelet[2209]: W0129 16:04:39.643140 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:39.643204 kubelet[2209]: E0129 16:04:39.643183 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:39.644016 kubelet[2209]: I0129 16:04:39.643795 2209 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:04:39.644016 kubelet[2209]: I0129 16:04:39.643904 2209 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:04:39.644940 kubelet[2209]: I0129 16:04:39.644919 2209 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:04:39.657159 kubelet[2209]: I0129 16:04:39.657124 2209 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:04:39.657159 kubelet[2209]: I0129 16:04:39.657147 2209 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:04:39.657159 kubelet[2209]: I0129 16:04:39.657165 2209 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:04:39.659954 kubelet[2209]: I0129 16:04:39.659934 2209 policy_none.go:49] "None policy: Start" Jan 29 16:04:39.659954 kubelet[2209]: I0129 16:04:39.659955 2209 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:04:39.660056 kubelet[2209]: I0129 16:04:39.659965 2209 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:04:39.660239 kubelet[2209]: I0129 16:04:39.660210 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:04:39.661440 kubelet[2209]: I0129 16:04:39.661407 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:04:39.661440 kubelet[2209]: I0129 16:04:39.661436 2209 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:04:39.661685 kubelet[2209]: I0129 16:04:39.661456 2209 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
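Editor's note: the Node Config dump earlier in this kubelet startup lists the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, all using the LessThan operator). A hedged sketch of how such signal/operator/value triples are evaluated against node stats; the sample stats below are invented for illustration, and this is not the eviction manager's actual code.

# Illustrative only: evaluate hard-eviction thresholds like those in the
# HardEvictionThresholds dump above. The observed stats are made-up samples.
thresholds = {
    "memory.available":   100 * 1024 * 1024,  # 100Mi, in bytes
    "nodefs.available":   0.10,               # fraction of filesystem space free
    "nodefs.inodesFree":  0.05,
    "imagefs.available":  0.15,
    "imagefs.inodesFree": 0.05,
}

observed = {  # hypothetical node stats
    "memory.available":   512 * 1024 * 1024,
    "nodefs.available":   0.42,
    "nodefs.inodesFree":  0.90,
    "imagefs.available":  0.08,   # below 15%, so this signal would trigger eviction
    "imagefs.inodesFree": 0.88,
}

for signal, limit in thresholds.items():
    breached = observed[signal] < limit   # every threshold here uses LessThan
    print(f"{signal:20s} {'BREACHED' if breached else 'ok'}")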
Jan 29 16:04:39.661685 kubelet[2209]: I0129 16:04:39.661464 2209 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:04:39.661685 kubelet[2209]: E0129 16:04:39.661503 2209 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:04:39.661937 kubelet[2209]: W0129 16:04:39.661890 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:39.661970 kubelet[2209]: E0129 16:04:39.661950 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:39.667236 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:04:39.676965 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:04:39.687851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:04:39.689052 kubelet[2209]: I0129 16:04:39.689007 2209 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:04:39.689528 kubelet[2209]: I0129 16:04:39.689210 2209 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:04:39.689528 kubelet[2209]: I0129 16:04:39.689230 2209 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:04:39.689528 kubelet[2209]: I0129 16:04:39.689493 2209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:04:39.690424 kubelet[2209]: E0129 16:04:39.690394 2209 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 16:04:39.690503 kubelet[2209]: E0129 16:04:39.690435 2209 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 16:04:39.770499 systemd[1]: Created slice kubepods-burstable-poddd8835c02fe8c2b9983405d232d313dc.slice - libcontainer container kubepods-burstable-poddd8835c02fe8c2b9983405d232d313dc.slice. Jan 29 16:04:39.789039 kubelet[2209]: E0129 16:04:39.788794 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:39.790137 kubelet[2209]: I0129 16:04:39.790082 2209 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:04:39.790534 kubelet[2209]: E0129 16:04:39.790503 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 29 16:04:39.791994 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
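Editor's note: with the API server refusing connections on 10.0.0.43:6443, the "Failed to ensure lease exists, will retry" entries back off geometrically: interval="200ms" above, then 400ms and 800ms in the entries that follow. A minimal sketch of that doubling pattern; only the 200ms start and the factor of two are taken from the log, the attempt count and the cap below are made-up parameters.

# Doubling retry backoff like the lease-retry intervals in this log
# (200ms -> 400ms -> 800ms). max_interval and attempts are invented here.
def backoff_intervals(start: float = 0.2, factor: float = 2.0,
                      max_interval: float = 7.0, attempts: int = 6):
    interval = start
    for _ in range(attempts):
        yield interval
        interval = min(interval * factor, max_interval)

print([f"{i * 1000:.0f}ms" for i in backoff_intervals()])
# -> ['200ms', '400ms', '800ms', '1600ms', '3200ms', '6400ms']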
Jan 29 16:04:39.801474 kubelet[2209]: E0129 16:04:39.801452 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:39.803967 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 29 16:04:39.805574 kubelet[2209]: E0129 16:04:39.805536 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:39.844070 kubelet[2209]: E0129 16:04:39.843975 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Jan 29 16:04:39.944618 kubelet[2209]: I0129 16:04:39.944571 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:39.944618 kubelet[2209]: I0129 16:04:39.944620 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:39.944762 kubelet[2209]: I0129 16:04:39.944643 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:39.944762 kubelet[2209]: I0129 16:04:39.944662 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:39.944762 kubelet[2209]: I0129 16:04:39.944691 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:39.944762 kubelet[2209]: I0129 16:04:39.944719 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd8835c02fe8c2b9983405d232d313dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd8835c02fe8c2b9983405d232d313dc\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:39.944848 kubelet[2209]: I0129 16:04:39.944762 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dd8835c02fe8c2b9983405d232d313dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd8835c02fe8c2b9983405d232d313dc\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:39.944848 kubelet[2209]: I0129 16:04:39.944803 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd8835c02fe8c2b9983405d232d313dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd8835c02fe8c2b9983405d232d313dc\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:39.944848 kubelet[2209]: I0129 16:04:39.944826 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:39.992627 kubelet[2209]: I0129 16:04:39.992598 2209 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:04:39.992977 kubelet[2209]: E0129 16:04:39.992940 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 29 16:04:40.089648 kubelet[2209]: E0129 16:04:40.089615 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:40.090368 containerd[1493]: time="2025-01-29T16:04:40.090327784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd8835c02fe8c2b9983405d232d313dc,Namespace:kube-system,Attempt:0,}" Jan 29 16:04:40.102571 kubelet[2209]: E0129 16:04:40.102470 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:40.103176 containerd[1493]: time="2025-01-29T16:04:40.103131460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 16:04:40.106417 kubelet[2209]: E0129 16:04:40.106397 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:40.106951 containerd[1493]: time="2025-01-29T16:04:40.106747002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 16:04:40.245332 kubelet[2209]: E0129 16:04:40.245265 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Jan 29 16:04:40.395110 kubelet[2209]: I0129 16:04:40.394856 2209 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:04:40.395323 kubelet[2209]: E0129 16:04:40.395290 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 29 
16:04:40.478241 kubelet[2209]: W0129 16:04:40.478154 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:40.478241 kubelet[2209]: E0129 16:04:40.478238 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:40.491857 kubelet[2209]: W0129 16:04:40.491813 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:40.491928 kubelet[2209]: E0129 16:04:40.491863 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:40.661580 kubelet[2209]: W0129 16:04:40.661383 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:40.661580 kubelet[2209]: E0129 16:04:40.661463 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:40.680560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477074063.mount: Deactivated successfully. 
Jan 29 16:04:40.685222 containerd[1493]: time="2025-01-29T16:04:40.685178482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:04:40.685996 containerd[1493]: time="2025-01-29T16:04:40.685740904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 16:04:40.687790 containerd[1493]: time="2025-01-29T16:04:40.687758507Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:04:40.688986 containerd[1493]: time="2025-01-29T16:04:40.688904283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:04:40.689173 containerd[1493]: time="2025-01-29T16:04:40.689150015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:04:40.690298 containerd[1493]: time="2025-01-29T16:04:40.690256049Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:04:40.690711 containerd[1493]: time="2025-01-29T16:04:40.690687480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:04:40.693005 containerd[1493]: time="2025-01-29T16:04:40.692943612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:04:40.694638 containerd[1493]: time="2025-01-29T16:04:40.694601182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.397043ms" Jan 29 16:04:40.695360 containerd[1493]: time="2025-01-29T16:04:40.695333015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.927109ms" Jan 29 16:04:40.698183 containerd[1493]: time="2025-01-29T16:04:40.698139802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.331528ms" Jan 29 16:04:40.865165 containerd[1493]: time="2025-01-29T16:04:40.864620769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:04:40.865165 containerd[1493]: time="2025-01-29T16:04:40.865141889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:04:40.865656 containerd[1493]: time="2025-01-29T16:04:40.865488635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:40.865749 containerd[1493]: time="2025-01-29T16:04:40.865308298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:04:40.865749 containerd[1493]: time="2025-01-29T16:04:40.865371852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:04:40.865749 containerd[1493]: time="2025-01-29T16:04:40.865388421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:40.865749 containerd[1493]: time="2025-01-29T16:04:40.865486154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:40.866033 containerd[1493]: time="2025-01-29T16:04:40.865951364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:40.871238 containerd[1493]: time="2025-01-29T16:04:40.869116503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:04:40.871238 containerd[1493]: time="2025-01-29T16:04:40.869172934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:04:40.871238 containerd[1493]: time="2025-01-29T16:04:40.869184980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:40.871238 containerd[1493]: time="2025-01-29T16:04:40.869290197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:40.893738 kubelet[2209]: W0129 16:04:40.893664 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 29 16:04:40.893738 kubelet[2209]: E0129 16:04:40.893732 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:04:40.896458 systemd[1]: Started cri-containerd-79679d581c4c1eb9e4c8ed7e3f0ff2888cb49a24d8fb2c7e1b174822d957d0db.scope - libcontainer container 79679d581c4c1eb9e4c8ed7e3f0ff2888cb49a24d8fb2c7e1b174822d957d0db. Jan 29 16:04:40.897792 systemd[1]: Started cri-containerd-c04ed351d1b96e2d760067bebe23db2a6d44359a3a94dbd03c4c7d1d031a9e96.scope - libcontainer container c04ed351d1b96e2d760067bebe23db2a6d44359a3a94dbd03c4c7d1d031a9e96. 
Jan 29 16:04:40.899133 systemd[1]: Started cri-containerd-e8b8904f062535b2df0c4bb1c3ae3e56f4db538fa7d88c2a6a36d54cc27ee9f9.scope - libcontainer container e8b8904f062535b2df0c4bb1c3ae3e56f4db538fa7d88c2a6a36d54cc27ee9f9. Jan 29 16:04:40.934650 containerd[1493]: time="2025-01-29T16:04:40.934510943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"c04ed351d1b96e2d760067bebe23db2a6d44359a3a94dbd03c4c7d1d031a9e96\"" Jan 29 16:04:40.935455 containerd[1493]: time="2025-01-29T16:04:40.935428876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79679d581c4c1eb9e4c8ed7e3f0ff2888cb49a24d8fb2c7e1b174822d957d0db\"" Jan 29 16:04:40.936512 kubelet[2209]: E0129 16:04:40.936491 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:40.936578 kubelet[2209]: E0129 16:04:40.936559 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:40.939622 containerd[1493]: time="2025-01-29T16:04:40.939595033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd8835c02fe8c2b9983405d232d313dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8b8904f062535b2df0c4bb1c3ae3e56f4db538fa7d88c2a6a36d54cc27ee9f9\"" Jan 29 16:04:40.939773 containerd[1493]: time="2025-01-29T16:04:40.939741192Z" level=info msg="CreateContainer within sandbox \"79679d581c4c1eb9e4c8ed7e3f0ff2888cb49a24d8fb2c7e1b174822d957d0db\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:04:40.940045 containerd[1493]: time="2025-01-29T16:04:40.939816312Z" level=info msg="CreateContainer within sandbox \"c04ed351d1b96e2d760067bebe23db2a6d44359a3a94dbd03c4c7d1d031a9e96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:04:40.940578 kubelet[2209]: E0129 16:04:40.940560 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:40.942152 containerd[1493]: time="2025-01-29T16:04:40.942108783Z" level=info msg="CreateContainer within sandbox \"e8b8904f062535b2df0c4bb1c3ae3e56f4db538fa7d88c2a6a36d54cc27ee9f9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:04:40.958410 containerd[1493]: time="2025-01-29T16:04:40.958363472Z" level=info msg="CreateContainer within sandbox \"c04ed351d1b96e2d760067bebe23db2a6d44359a3a94dbd03c4c7d1d031a9e96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3962784baf2c571a196346e78a4433d564af08e9648fc371fd45203cebe1aad8\"" Jan 29 16:04:40.958940 containerd[1493]: time="2025-01-29T16:04:40.958912327Z" level=info msg="CreateContainer within sandbox \"e8b8904f062535b2df0c4bb1c3ae3e56f4db538fa7d88c2a6a36d54cc27ee9f9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29ede998464a9c599d709e11500e1ab377fc934530829381a2245489b2a4c88e\"" Jan 29 16:04:40.959225 containerd[1493]: time="2025-01-29T16:04:40.959186674Z" level=info msg="StartContainer for 
\"3962784baf2c571a196346e78a4433d564af08e9648fc371fd45203cebe1aad8\"" Jan 29 16:04:40.959412 containerd[1493]: time="2025-01-29T16:04:40.959231138Z" level=info msg="StartContainer for \"29ede998464a9c599d709e11500e1ab377fc934530829381a2245489b2a4c88e\"" Jan 29 16:04:40.960114 containerd[1493]: time="2025-01-29T16:04:40.960077033Z" level=info msg="CreateContainer within sandbox \"79679d581c4c1eb9e4c8ed7e3f0ff2888cb49a24d8fb2c7e1b174822d957d0db\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"50f2566136ce6a0fb201e67c368a8b74e609944d81d5028d592f1e012647276c\"" Jan 29 16:04:40.960543 containerd[1493]: time="2025-01-29T16:04:40.960451434Z" level=info msg="StartContainer for \"50f2566136ce6a0fb201e67c368a8b74e609944d81d5028d592f1e012647276c\"" Jan 29 16:04:40.990428 systemd[1]: Started cri-containerd-29ede998464a9c599d709e11500e1ab377fc934530829381a2245489b2a4c88e.scope - libcontainer container 29ede998464a9c599d709e11500e1ab377fc934530829381a2245489b2a4c88e. Jan 29 16:04:40.991538 systemd[1]: Started cri-containerd-3962784baf2c571a196346e78a4433d564af08e9648fc371fd45203cebe1aad8.scope - libcontainer container 3962784baf2c571a196346e78a4433d564af08e9648fc371fd45203cebe1aad8. Jan 29 16:04:40.992422 systemd[1]: Started cri-containerd-50f2566136ce6a0fb201e67c368a8b74e609944d81d5028d592f1e012647276c.scope - libcontainer container 50f2566136ce6a0fb201e67c368a8b74e609944d81d5028d592f1e012647276c. Jan 29 16:04:41.033758 containerd[1493]: time="2025-01-29T16:04:41.033475330Z" level=info msg="StartContainer for \"50f2566136ce6a0fb201e67c368a8b74e609944d81d5028d592f1e012647276c\" returns successfully" Jan 29 16:04:41.034329 containerd[1493]: time="2025-01-29T16:04:41.034204697Z" level=info msg="StartContainer for \"29ede998464a9c599d709e11500e1ab377fc934530829381a2245489b2a4c88e\" returns successfully" Jan 29 16:04:41.034329 containerd[1493]: time="2025-01-29T16:04:41.034259885Z" level=info msg="StartContainer for \"3962784baf2c571a196346e78a4433d564af08e9648fc371fd45203cebe1aad8\" returns successfully" Jan 29 16:04:41.046033 kubelet[2209]: E0129 16:04:41.045998 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Jan 29 16:04:41.197443 kubelet[2209]: I0129 16:04:41.197025 2209 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:04:41.197443 kubelet[2209]: E0129 16:04:41.197414 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 29 16:04:41.669851 kubelet[2209]: E0129 16:04:41.669596 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:41.669851 kubelet[2209]: E0129 16:04:41.669740 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:41.674655 kubelet[2209]: E0129 16:04:41.673674 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:41.674655 kubelet[2209]: E0129 16:04:41.673842 2209 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:41.675147 kubelet[2209]: E0129 16:04:41.675131 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:41.675369 kubelet[2209]: E0129 16:04:41.675355 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:42.568771 kubelet[2209]: E0129 16:04:42.568610 2209 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f3562cc387873 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:04:39.637620851 +0000 UTC m=+1.302265579,LastTimestamp:2025-01-29 16:04:39.637620851 +0000 UTC m=+1.302265579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:04:42.622508 kubelet[2209]: E0129 16:04:42.622390 2209 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f3562cc867fcc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:04:39.64273454 +0000 UTC m=+1.307379308,LastTimestamp:2025-01-29 16:04:39.64273454 +0000 UTC m=+1.307379308,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:04:42.649341 kubelet[2209]: E0129 16:04:42.649304 2209 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 16:04:42.678585 kubelet[2209]: E0129 16:04:42.678168 2209 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f3562cd599050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:04:39.656566864 +0000 UTC m=+1.321211592,LastTimestamp:2025-01-29 16:04:39.656566864 +0000 UTC m=+1.321211592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:04:42.678585 kubelet[2209]: E0129 16:04:42.678367 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:42.679365 kubelet[2209]: E0129 16:04:42.678869 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:42.680075 kubelet[2209]: E0129 16:04:42.679949 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:04:42.680105 kubelet[2209]: E0129 16:04:42.680081 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:42.801144 kubelet[2209]: I0129 16:04:42.800782 2209 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:04:42.804571 kubelet[2209]: I0129 16:04:42.804522 2209 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 16:04:42.804571 kubelet[2209]: E0129 16:04:42.804552 2209 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 16:04:42.811338 kubelet[2209]: E0129 16:04:42.811312 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:04:42.943782 kubelet[2209]: I0129 16:04:42.943662 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:42.952024 kubelet[2209]: E0129 16:04:42.951978 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:42.952024 kubelet[2209]: I0129 16:04:42.952014 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:42.953551 kubelet[2209]: E0129 16:04:42.953506 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:42.953551 kubelet[2209]: I0129 16:04:42.953529 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:42.954954 kubelet[2209]: E0129 16:04:42.954933 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:43.632833 kubelet[2209]: I0129 16:04:43.632805 2209 apiserver.go:52] "Watching apiserver" Jan 29 16:04:43.643179 kubelet[2209]: I0129 16:04:43.643149 2209 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:04:44.295510 systemd[1]: Reload requested from client PID 2489 ('systemctl') (unit session-7.scope)... Jan 29 16:04:44.295526 systemd[1]: Reloading... Jan 29 16:04:44.369303 zram_generator::config[2536]: No configuration found. Jan 29 16:04:44.454765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:04:44.537480 systemd[1]: Reloading finished in 241 ms. Jan 29 16:04:44.556124 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:44.572268 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 29 16:04:44.572619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:44.572697 systemd[1]: kubelet.service: Consumed 1.662s CPU time, 124.9M memory peak. Jan 29 16:04:44.581561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:44.686152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:44.693472 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:04:44.740288 kubelet[2575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:04:44.740288 kubelet[2575]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:04:44.740288 kubelet[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:04:44.740626 kubelet[2575]: I0129 16:04:44.740420 2575 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:04:44.746327 kubelet[2575]: I0129 16:04:44.746271 2575 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:04:44.746327 kubelet[2575]: I0129 16:04:44.746317 2575 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:04:44.746592 kubelet[2575]: I0129 16:04:44.746568 2575 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:04:44.747866 kubelet[2575]: I0129 16:04:44.747842 2575 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:04:44.750244 kubelet[2575]: I0129 16:04:44.750217 2575 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:04:44.754864 kubelet[2575]: E0129 16:04:44.754807 2575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:04:44.754864 kubelet[2575]: I0129 16:04:44.754841 2575 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:04:44.757424 kubelet[2575]: I0129 16:04:44.757403 2575 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:04:44.757632 kubelet[2575]: I0129 16:04:44.757606 2575 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:04:44.757800 kubelet[2575]: I0129 16:04:44.757634 2575 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:04:44.757894 kubelet[2575]: I0129 16:04:44.757810 2575 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:04:44.757894 kubelet[2575]: I0129 16:04:44.757818 2575 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:04:44.757894 kubelet[2575]: I0129 16:04:44.757861 2575 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:04:44.757995 kubelet[2575]: I0129 16:04:44.757983 2575 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:04:44.758022 kubelet[2575]: I0129 16:04:44.757996 2575 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:04:44.758022 kubelet[2575]: I0129 16:04:44.758013 2575 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:04:44.758022 kubelet[2575]: I0129 16:04:44.758021 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:04:44.759266 kubelet[2575]: I0129 16:04:44.759209 2575 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:04:44.759872 kubelet[2575]: I0129 16:04:44.759853 2575 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:04:44.760375 kubelet[2575]: I0129 16:04:44.760355 2575 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:04:44.761327 kubelet[2575]: I0129 16:04:44.760456 2575 server.go:1287] "Started kubelet" Jan 29 16:04:44.761327 kubelet[2575]: I0129 16:04:44.760757 2575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:04:44.761327 kubelet[2575]: I0129 
16:04:44.760532 2575 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:04:44.761327 kubelet[2575]: I0129 16:04:44.761103 2575 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:04:44.762677 kubelet[2575]: I0129 16:04:44.762656 2575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:04:44.762870 kubelet[2575]: I0129 16:04:44.762847 2575 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:04:44.763692 kubelet[2575]: I0129 16:04:44.762658 2575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:04:44.763850 kubelet[2575]: I0129 16:04:44.763829 2575 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:04:44.764126 kubelet[2575]: I0129 16:04:44.764105 2575 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:04:44.764471 kubelet[2575]: I0129 16:04:44.764222 2575 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:04:44.764927 kubelet[2575]: I0129 16:04:44.764894 2575 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:04:44.771777 kubelet[2575]: E0129 16:04:44.771748 2575 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:04:44.774804 kubelet[2575]: E0129 16:04:44.774760 2575 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:04:44.775500 kubelet[2575]: I0129 16:04:44.775476 2575 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:04:44.775577 kubelet[2575]: I0129 16:04:44.775566 2575 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:04:44.787993 kubelet[2575]: I0129 16:04:44.787660 2575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:04:44.790357 kubelet[2575]: I0129 16:04:44.790301 2575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:04:44.790357 kubelet[2575]: I0129 16:04:44.790334 2575 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:04:44.790357 kubelet[2575]: I0129 16:04:44.790352 2575 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 16:04:44.790357 kubelet[2575]: I0129 16:04:44.790358 2575 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:04:44.790605 kubelet[2575]: E0129 16:04:44.790400 2575 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:04:44.813082 kubelet[2575]: I0129 16:04:44.812958 2575 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:04:44.813082 kubelet[2575]: I0129 16:04:44.812979 2575 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:04:44.813082 kubelet[2575]: I0129 16:04:44.813000 2575 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:04:44.813212 kubelet[2575]: I0129 16:04:44.813158 2575 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:04:44.813212 kubelet[2575]: I0129 16:04:44.813170 2575 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:04:44.813212 kubelet[2575]: I0129 16:04:44.813188 2575 policy_none.go:49] "None policy: Start" Jan 29 16:04:44.813212 kubelet[2575]: I0129 16:04:44.813196 2575 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:04:44.813212 kubelet[2575]: I0129 16:04:44.813204 2575 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:04:44.813338 kubelet[2575]: I0129 16:04:44.813323 2575 state_mem.go:75] "Updated machine memory state" Jan 29 16:04:44.817448 kubelet[2575]: I0129 16:04:44.817260 2575 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:04:44.817448 kubelet[2575]: I0129 16:04:44.817443 2575 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:04:44.817833 kubelet[2575]: I0129 16:04:44.817455 2575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:04:44.817833 kubelet[2575]: I0129 16:04:44.817674 2575 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:04:44.819258 kubelet[2575]: E0129 16:04:44.819242 2575 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 16:04:44.891911 kubelet[2575]: I0129 16:04:44.891631 2575 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:44.891911 kubelet[2575]: I0129 16:04:44.891785 2575 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:44.892100 kubelet[2575]: I0129 16:04:44.891631 2575 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:44.921405 kubelet[2575]: I0129 16:04:44.921384 2575 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:04:44.928078 kubelet[2575]: I0129 16:04:44.928050 2575 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 16:04:44.928547 kubelet[2575]: I0129 16:04:44.928236 2575 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 16:04:44.965416 kubelet[2575]: I0129 16:04:44.965371 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:44.965416 kubelet[2575]: I0129 16:04:44.965414 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:44.965555 kubelet[2575]: I0129 16:04:44.965440 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd8835c02fe8c2b9983405d232d313dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd8835c02fe8c2b9983405d232d313dc\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:44.965555 kubelet[2575]: I0129 16:04:44.965489 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd8835c02fe8c2b9983405d232d313dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd8835c02fe8c2b9983405d232d313dc\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:44.965555 kubelet[2575]: I0129 16:04:44.965523 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:44.965555 kubelet[2575]: I0129 16:04:44.965549 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:44.965638 kubelet[2575]: I0129 16:04:44.965568 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:44.965638 kubelet[2575]: I0129 16:04:44.965582 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd8835c02fe8c2b9983405d232d313dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd8835c02fe8c2b9983405d232d313dc\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:44.965638 kubelet[2575]: I0129 16:04:44.965597 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:45.198534 kubelet[2575]: E0129 16:04:45.198409 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:45.198534 kubelet[2575]: E0129 16:04:45.198473 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:45.198534 kubelet[2575]: E0129 16:04:45.198480 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:45.297902 sudo[2614]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:04:45.298211 sudo[2614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:04:45.724704 sudo[2614]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:45.758371 kubelet[2575]: I0129 16:04:45.758341 2575 apiserver.go:52] "Watching apiserver" Jan 29 16:04:45.764876 kubelet[2575]: I0129 16:04:45.764823 2575 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:04:45.800505 kubelet[2575]: I0129 16:04:45.800365 2575 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:45.800505 kubelet[2575]: I0129 16:04:45.800444 2575 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:45.801852 kubelet[2575]: I0129 16:04:45.800618 2575 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:45.809464 kubelet[2575]: E0129 16:04:45.808884 2575 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 16:04:45.809464 kubelet[2575]: E0129 16:04:45.808997 2575 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:04:45.809464 kubelet[2575]: E0129 16:04:45.809035 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:45.809464 kubelet[2575]: E0129 
16:04:45.809145 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:45.809464 kubelet[2575]: E0129 16:04:45.809233 2575 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:04:45.809464 kubelet[2575]: E0129 16:04:45.809351 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:45.828650 kubelet[2575]: I0129 16:04:45.828522 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.828506273 podStartE2EDuration="1.828506273s" podCreationTimestamp="2025-01-29 16:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:04:45.820097242 +0000 UTC m=+1.122090972" watchObservedRunningTime="2025-01-29 16:04:45.828506273 +0000 UTC m=+1.130500003" Jan 29 16:04:45.837590 kubelet[2575]: I0129 16:04:45.837542 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.837525941 podStartE2EDuration="1.837525941s" podCreationTimestamp="2025-01-29 16:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:04:45.837406174 +0000 UTC m=+1.139399905" watchObservedRunningTime="2025-01-29 16:04:45.837525941 +0000 UTC m=+1.139519671" Jan 29 16:04:45.837714 kubelet[2575]: I0129 16:04:45.837636 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.837632382 podStartE2EDuration="1.837632382s" podCreationTimestamp="2025-01-29 16:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:04:45.829042601 +0000 UTC m=+1.131036331" watchObservedRunningTime="2025-01-29 16:04:45.837632382 +0000 UTC m=+1.139626112" Jan 29 16:04:46.801737 kubelet[2575]: E0129 16:04:46.801567 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:46.801737 kubelet[2575]: E0129 16:04:46.801678 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:46.802155 kubelet[2575]: E0129 16:04:46.801881 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:47.443356 sudo[1670]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:47.445076 sshd[1669]: Connection closed by 10.0.0.1 port 60168 Jan 29 16:04:47.445634 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:47.449421 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:60168.service: Deactivated successfully. Jan 29 16:04:47.451848 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 29 16:04:47.452051 systemd[1]: session-7.scope: Consumed 6.299s CPU time, 261.9M memory peak. Jan 29 16:04:47.452917 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:04:47.453673 systemd-logind[1467]: Removed session 7. Jan 29 16:04:47.803778 kubelet[2575]: E0129 16:04:47.803390 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:48.927094 kubelet[2575]: E0129 16:04:48.927061 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:49.634687 kubelet[2575]: I0129 16:04:49.634659 2575 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:04:49.635018 containerd[1493]: time="2025-01-29T16:04:49.634982275Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:04:49.635950 kubelet[2575]: I0129 16:04:49.635493 2575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:04:50.619023 systemd[1]: Created slice kubepods-besteffort-pod89cd74de_705f_47a4_9310_7b5c089ca8af.slice - libcontainer container kubepods-besteffort-pod89cd74de_705f_47a4_9310_7b5c089ca8af.slice. Jan 29 16:04:50.635420 systemd[1]: Created slice kubepods-burstable-pod2c82e933_8b52_443e_879d_bf71a97c89ac.slice - libcontainer container kubepods-burstable-pod2c82e933_8b52_443e_879d_bf71a97c89ac.slice. Jan 29 16:04:50.702185 kubelet[2575]: I0129 16:04:50.701865 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89cd74de-705f-47a4-9310-7b5c089ca8af-kube-proxy\") pod \"kube-proxy-8zq6x\" (UID: \"89cd74de-705f-47a4-9310-7b5c089ca8af\") " pod="kube-system/kube-proxy-8zq6x" Jan 29 16:04:50.702185 kubelet[2575]: I0129 16:04:50.701907 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89cd74de-705f-47a4-9310-7b5c089ca8af-xtables-lock\") pod \"kube-proxy-8zq6x\" (UID: \"89cd74de-705f-47a4-9310-7b5c089ca8af\") " pod="kube-system/kube-proxy-8zq6x" Jan 29 16:04:50.702185 kubelet[2575]: I0129 16:04:50.701931 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-etc-cni-netd\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702185 kubelet[2575]: I0129 16:04:50.701992 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-xtables-lock\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702185 kubelet[2575]: I0129 16:04:50.702035 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-cgroup\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702185 
kubelet[2575]: I0129 16:04:50.702055 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cni-path\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702626 kubelet[2575]: I0129 16:04:50.702140 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89cd74de-705f-47a4-9310-7b5c089ca8af-lib-modules\") pod \"kube-proxy-8zq6x\" (UID: \"89cd74de-705f-47a4-9310-7b5c089ca8af\") " pod="kube-system/kube-proxy-8zq6x" Jan 29 16:04:50.702626 kubelet[2575]: I0129 16:04:50.702168 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-run\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702626 kubelet[2575]: I0129 16:04:50.702188 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-hostproc\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702626 kubelet[2575]: I0129 16:04:50.702219 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c82e933-8b52-443e-879d-bf71a97c89ac-clustermesh-secrets\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702626 kubelet[2575]: I0129 16:04:50.702317 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmr8\" (UniqueName: \"kubernetes.io/projected/89cd74de-705f-47a4-9310-7b5c089ca8af-kube-api-access-whmr8\") pod \"kube-proxy-8zq6x\" (UID: \"89cd74de-705f-47a4-9310-7b5c089ca8af\") " pod="kube-system/kube-proxy-8zq6x" Jan 29 16:04:50.702626 kubelet[2575]: I0129 16:04:50.702376 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-bpf-maps\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.702754 kubelet[2575]: I0129 16:04:50.702398 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-lib-modules\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.713645 systemd[1]: Created slice kubepods-besteffort-podef2c2b0e_aa33_485c_ba90_94c736c51d47.slice - libcontainer container kubepods-besteffort-podef2c2b0e_aa33_485c_ba90_94c736c51d47.slice. 
Jan 29 16:04:50.803599 kubelet[2575]: I0129 16:04:50.803480 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-hubble-tls\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.803599 kubelet[2575]: I0129 16:04:50.803513 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-net\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.803759 kubelet[2575]: I0129 16:04:50.803652 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-kernel\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.803759 kubelet[2575]: I0129 16:04:50.803688 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtmf\" (UniqueName: \"kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-kube-api-access-cqtmf\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.803759 kubelet[2575]: I0129 16:04:50.803724 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-config-path\") pod \"cilium-b8qp7\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " pod="kube-system/cilium-b8qp7" Jan 29 16:04:50.904887 kubelet[2575]: I0129 16:04:50.904777 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmd4\" (UniqueName: \"kubernetes.io/projected/ef2c2b0e-aa33-485c-ba90-94c736c51d47-kube-api-access-4nmd4\") pod \"cilium-operator-6c4d7847fc-hl2ww\" (UID: \"ef2c2b0e-aa33-485c-ba90-94c736c51d47\") " pod="kube-system/cilium-operator-6c4d7847fc-hl2ww" Jan 29 16:04:50.904887 kubelet[2575]: I0129 16:04:50.904861 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef2c2b0e-aa33-485c-ba90-94c736c51d47-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hl2ww\" (UID: \"ef2c2b0e-aa33-485c-ba90-94c736c51d47\") " pod="kube-system/cilium-operator-6c4d7847fc-hl2ww" Jan 29 16:04:50.935122 kubelet[2575]: E0129 16:04:50.935057 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:50.935757 containerd[1493]: time="2025-01-29T16:04:50.935601688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8zq6x,Uid:89cd74de-705f-47a4-9310-7b5c089ca8af,Namespace:kube-system,Attempt:0,}" Jan 29 16:04:50.938437 kubelet[2575]: E0129 16:04:50.938331 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:50.938697 containerd[1493]: time="2025-01-29T16:04:50.938667272Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8qp7,Uid:2c82e933-8b52-443e-879d-bf71a97c89ac,Namespace:kube-system,Attempt:0,}" Jan 29 16:04:50.997608 containerd[1493]: time="2025-01-29T16:04:50.997515091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:04:50.997608 containerd[1493]: time="2025-01-29T16:04:50.997575588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:04:50.998074 containerd[1493]: time="2025-01-29T16:04:50.997590352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:50.998300 containerd[1493]: time="2025-01-29T16:04:50.998182919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:50.998865 containerd[1493]: time="2025-01-29T16:04:50.998540980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:04:50.998865 containerd[1493]: time="2025-01-29T16:04:50.998600196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:04:50.998865 containerd[1493]: time="2025-01-29T16:04:50.998615961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:50.998865 containerd[1493]: time="2025-01-29T16:04:50.998695583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:51.018082 kubelet[2575]: E0129 16:04:51.018056 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:51.018604 containerd[1493]: time="2025-01-29T16:04:51.018559115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hl2ww,Uid:ef2c2b0e-aa33-485c-ba90-94c736c51d47,Namespace:kube-system,Attempt:0,}" Jan 29 16:04:51.022449 systemd[1]: Started cri-containerd-93914c729bb3af8c100b7e539e65ece1e300273cfdba63488d6d4cfe9671462b.scope - libcontainer container 93914c729bb3af8c100b7e539e65ece1e300273cfdba63488d6d4cfe9671462b. Jan 29 16:04:51.023918 systemd[1]: Started cri-containerd-9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7.scope - libcontainer container 9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7. Jan 29 16:04:51.042575 containerd[1493]: time="2025-01-29T16:04:51.041908002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:04:51.042575 containerd[1493]: time="2025-01-29T16:04:51.042416976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:04:51.042575 containerd[1493]: time="2025-01-29T16:04:51.042435261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:51.042575 containerd[1493]: time="2025-01-29T16:04:51.042534247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:04:51.045207 containerd[1493]: time="2025-01-29T16:04:51.045165903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8zq6x,Uid:89cd74de-705f-47a4-9310-7b5c089ca8af,Namespace:kube-system,Attempt:0,} returns sandbox id \"93914c729bb3af8c100b7e539e65ece1e300273cfdba63488d6d4cfe9671462b\"" Jan 29 16:04:51.046369 kubelet[2575]: E0129 16:04:51.046340 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:51.049120 containerd[1493]: time="2025-01-29T16:04:51.049053129Z" level=info msg="CreateContainer within sandbox \"93914c729bb3af8c100b7e539e65ece1e300273cfdba63488d6d4cfe9671462b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:04:51.052183 containerd[1493]: time="2025-01-29T16:04:51.052099734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8qp7,Uid:2c82e933-8b52-443e-879d-bf71a97c89ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\"" Jan 29 16:04:51.052890 kubelet[2575]: E0129 16:04:51.052848 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:51.054079 containerd[1493]: time="2025-01-29T16:04:51.054037566Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:04:51.065505 systemd[1]: Started cri-containerd-0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259.scope - libcontainer container 0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259. Jan 29 16:04:51.066513 containerd[1493]: time="2025-01-29T16:04:51.066443322Z" level=info msg="CreateContainer within sandbox \"93914c729bb3af8c100b7e539e65ece1e300273cfdba63488d6d4cfe9671462b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51e856c5578a76c43a63ad56d2f3dda28753b5040f9e44d368536c5d84d2a810\"" Jan 29 16:04:51.067178 containerd[1493]: time="2025-01-29T16:04:51.066884079Z" level=info msg="StartContainer for \"51e856c5578a76c43a63ad56d2f3dda28753b5040f9e44d368536c5d84d2a810\"" Jan 29 16:04:51.104464 systemd[1]: Started cri-containerd-51e856c5578a76c43a63ad56d2f3dda28753b5040f9e44d368536c5d84d2a810.scope - libcontainer container 51e856c5578a76c43a63ad56d2f3dda28753b5040f9e44d368536c5d84d2a810. 
Jan 29 16:04:51.105394 containerd[1493]: time="2025-01-29T16:04:51.105333154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hl2ww,Uid:ef2c2b0e-aa33-485c-ba90-94c736c51d47,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259\"" Jan 29 16:04:51.106207 kubelet[2575]: E0129 16:04:51.106050 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:51.132049 containerd[1493]: time="2025-01-29T16:04:51.131972510Z" level=info msg="StartContainer for \"51e856c5578a76c43a63ad56d2f3dda28753b5040f9e44d368536c5d84d2a810\" returns successfully" Jan 29 16:04:51.812846 kubelet[2575]: E0129 16:04:51.812558 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:53.831287 kubelet[2575]: E0129 16:04:53.828797 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:53.854472 kubelet[2575]: I0129 16:04:53.853887 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8zq6x" podStartSLOduration=3.853866777 podStartE2EDuration="3.853866777s" podCreationTimestamp="2025-01-29 16:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:04:51.821604859 +0000 UTC m=+7.123598589" watchObservedRunningTime="2025-01-29 16:04:53.853866777 +0000 UTC m=+9.155860467" Jan 29 16:04:54.825381 kubelet[2575]: E0129 16:04:54.825319 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:57.766914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128183784.mount: Deactivated successfully. 
Jan 29 16:04:57.899457 kubelet[2575]: E0129 16:04:57.899421 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:59.074758 kubelet[2575]: E0129 16:04:59.073546 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:59.075704 containerd[1493]: time="2025-01-29T16:04:59.075257329Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:59.077082 containerd[1493]: time="2025-01-29T16:04:59.076867302Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 16:04:59.079736 containerd[1493]: time="2025-01-29T16:04:59.079463712Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:59.083267 containerd[1493]: time="2025-01-29T16:04:59.082878170Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.028795194s" Jan 29 16:04:59.083267 containerd[1493]: time="2025-01-29T16:04:59.082942020Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 16:04:59.089972 containerd[1493]: time="2025-01-29T16:04:59.089812263Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:04:59.092080 containerd[1493]: time="2025-01-29T16:04:59.092040494Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:04:59.154110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531968586.mount: Deactivated successfully. Jan 29 16:04:59.158217 containerd[1493]: time="2025-01-29T16:04:59.158170958Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\"" Jan 29 16:04:59.158631 containerd[1493]: time="2025-01-29T16:04:59.158595985Z" level=info msg="StartContainer for \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\"" Jan 29 16:04:59.188455 systemd[1]: Started cri-containerd-bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5.scope - libcontainer container bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5. 
Jan 29 16:04:59.212858 containerd[1493]: time="2025-01-29T16:04:59.212816491Z" level=info msg="StartContainer for \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\" returns successfully" Jan 29 16:04:59.257189 systemd[1]: cri-containerd-bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5.scope: Deactivated successfully. Jan 29 16:04:59.420908 containerd[1493]: time="2025-01-29T16:04:59.416295005Z" level=info msg="shim disconnected" id=bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5 namespace=k8s.io Jan 29 16:04:59.420908 containerd[1493]: time="2025-01-29T16:04:59.420836961Z" level=warning msg="cleaning up after shim disconnected" id=bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5 namespace=k8s.io Jan 29 16:04:59.420908 containerd[1493]: time="2025-01-29T16:04:59.420848762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:04:59.906795 kubelet[2575]: E0129 16:04:59.906373 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:04:59.911400 containerd[1493]: time="2025-01-29T16:04:59.911306111Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:04:59.926894 containerd[1493]: time="2025-01-29T16:04:59.926846920Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\"" Jan 29 16:04:59.927669 containerd[1493]: time="2025-01-29T16:04:59.927541910Z" level=info msg="StartContainer for \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\"" Jan 29 16:04:59.950432 systemd[1]: Started cri-containerd-669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac.scope - libcontainer container 669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac. Jan 29 16:04:59.970435 containerd[1493]: time="2025-01-29T16:04:59.970378142Z" level=info msg="StartContainer for \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\" returns successfully" Jan 29 16:05:00.002144 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:05:00.002387 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:05:00.002749 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:05:00.010093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:05:00.010302 systemd[1]: cri-containerd-669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac.scope: Deactivated successfully. Jan 29 16:05:00.040945 containerd[1493]: time="2025-01-29T16:05:00.040888346Z" level=info msg="shim disconnected" id=669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac namespace=k8s.io Jan 29 16:05:00.040945 containerd[1493]: time="2025-01-29T16:05:00.040939073Z" level=warning msg="cleaning up after shim disconnected" id=669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac namespace=k8s.io Jan 29 16:05:00.041130 containerd[1493]: time="2025-01-29T16:05:00.040949195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:00.041193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 16:05:00.116980 update_engine[1469]: I20250129 16:05:00.116431 1469 update_attempter.cc:509] Updating boot flags... Jan 29 16:05:00.138378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3107) Jan 29 16:05:00.172957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5-rootfs.mount: Deactivated successfully. Jan 29 16:05:00.197294 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3106) Jan 29 16:05:00.234298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3106) Jan 29 16:05:00.260436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934028513.mount: Deactivated successfully. Jan 29 16:05:00.526150 containerd[1493]: time="2025-01-29T16:05:00.526106329Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:05:00.526633 containerd[1493]: time="2025-01-29T16:05:00.526591640Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 16:05:00.527882 containerd[1493]: time="2025-01-29T16:05:00.527854787Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:05:00.529135 containerd[1493]: time="2025-01-29T16:05:00.529083809Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.439227299s" Jan 29 16:05:00.529135 containerd[1493]: time="2025-01-29T16:05:00.529114813Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 16:05:00.531441 containerd[1493]: time="2025-01-29T16:05:00.531407632Z" level=info msg="CreateContainer within sandbox \"0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:05:00.543388 containerd[1493]: time="2025-01-29T16:05:00.543335355Z" level=info msg="CreateContainer within sandbox \"0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\"" Jan 29 16:05:00.543825 containerd[1493]: time="2025-01-29T16:05:00.543792462Z" level=info msg="StartContainer for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\"" Jan 29 16:05:00.569450 systemd[1]: Started cri-containerd-dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5.scope - libcontainer container dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5. 
Jan 29 16:05:00.591856 containerd[1493]: time="2025-01-29T16:05:00.591808278Z" level=info msg="StartContainer for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" returns successfully" Jan 29 16:05:00.910182 kubelet[2575]: E0129 16:05:00.909662 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:00.914139 kubelet[2575]: E0129 16:05:00.913946 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:00.916299 containerd[1493]: time="2025-01-29T16:05:00.916237700Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:05:00.922439 kubelet[2575]: I0129 16:05:00.922385 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hl2ww" podStartSLOduration=1.498864702 podStartE2EDuration="10.922370006s" podCreationTimestamp="2025-01-29 16:04:50 +0000 UTC" firstStartedPulling="2025-01-29 16:04:51.106658984 +0000 UTC m=+6.408652714" lastFinishedPulling="2025-01-29 16:05:00.530164288 +0000 UTC m=+15.832158018" observedRunningTime="2025-01-29 16:05:00.922230306 +0000 UTC m=+16.224223996" watchObservedRunningTime="2025-01-29 16:05:00.922370006 +0000 UTC m=+16.224363776" Jan 29 16:05:00.945926 containerd[1493]: time="2025-01-29T16:05:00.945871919Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\"" Jan 29 16:05:00.946430 containerd[1493]: time="2025-01-29T16:05:00.946390716Z" level=info msg="StartContainer for \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\"" Jan 29 16:05:00.976455 systemd[1]: Started cri-containerd-920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91.scope - libcontainer container 920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91. Jan 29 16:05:01.003037 containerd[1493]: time="2025-01-29T16:05:01.002905812Z" level=info msg="StartContainer for \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\" returns successfully" Jan 29 16:05:01.036364 systemd[1]: cri-containerd-920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91.scope: Deactivated successfully. 
Jan 29 16:05:01.101563 containerd[1493]: time="2025-01-29T16:05:01.101492071Z" level=info msg="shim disconnected" id=920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91 namespace=k8s.io Jan 29 16:05:01.101563 containerd[1493]: time="2025-01-29T16:05:01.101545598Z" level=warning msg="cleaning up after shim disconnected" id=920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91 namespace=k8s.io Jan 29 16:05:01.101563 containerd[1493]: time="2025-01-29T16:05:01.101558200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:01.918309 kubelet[2575]: E0129 16:05:01.918202 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:01.918309 kubelet[2575]: E0129 16:05:01.918294 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:01.922065 containerd[1493]: time="2025-01-29T16:05:01.921831600Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:05:01.937937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568229531.mount: Deactivated successfully. Jan 29 16:05:01.940181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497630012.mount: Deactivated successfully. Jan 29 16:05:01.940718 containerd[1493]: time="2025-01-29T16:05:01.940534831Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\"" Jan 29 16:05:01.941462 containerd[1493]: time="2025-01-29T16:05:01.941436436Z" level=info msg="StartContainer for \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\"" Jan 29 16:05:01.975416 systemd[1]: Started cri-containerd-b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0.scope - libcontainer container b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0. Jan 29 16:05:01.995480 systemd[1]: cri-containerd-b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0.scope: Deactivated successfully. Jan 29 16:05:01.997442 containerd[1493]: time="2025-01-29T16:05:01.997399229Z" level=info msg="StartContainer for \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\" returns successfully" Jan 29 16:05:02.015416 containerd[1493]: time="2025-01-29T16:05:02.015249901Z" level=info msg="shim disconnected" id=b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0 namespace=k8s.io Jan 29 16:05:02.015416 containerd[1493]: time="2025-01-29T16:05:02.015402321Z" level=warning msg="cleaning up after shim disconnected" id=b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0 namespace=k8s.io Jan 29 16:05:02.015596 containerd[1493]: time="2025-01-29T16:05:02.015412083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:02.173264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0-rootfs.mount: Deactivated successfully. 
Jan 29 16:05:02.924220 kubelet[2575]: E0129 16:05:02.924165 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:02.927264 containerd[1493]: time="2025-01-29T16:05:02.927180665Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:05:02.943902 containerd[1493]: time="2025-01-29T16:05:02.943806184Z" level=info msg="CreateContainer within sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\"" Jan 29 16:05:02.944668 containerd[1493]: time="2025-01-29T16:05:02.944300209Z" level=info msg="StartContainer for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\"" Jan 29 16:05:02.970463 systemd[1]: Started cri-containerd-2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80.scope - libcontainer container 2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80. Jan 29 16:05:02.993720 containerd[1493]: time="2025-01-29T16:05:02.992221113Z" level=info msg="StartContainer for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" returns successfully" Jan 29 16:05:03.113423 kubelet[2575]: I0129 16:05:03.113219 2575 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:05:03.155325 systemd[1]: Created slice kubepods-burstable-pod9960e5c3_ce25_4be3_8446_00439d98e732.slice - libcontainer container kubepods-burstable-pod9960e5c3_ce25_4be3_8446_00439d98e732.slice. Jan 29 16:05:03.163525 systemd[1]: Created slice kubepods-burstable-podb2276ffb_9451_4ec9_8152_3511a43bbf2f.slice - libcontainer container kubepods-burstable-podb2276ffb_9451_4ec9_8152_3511a43bbf2f.slice. 
Jan 29 16:05:03.226791 kubelet[2575]: I0129 16:05:03.226614 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2276ffb-9451-4ec9-8152-3511a43bbf2f-config-volume\") pod \"coredns-668d6bf9bc-rxhjn\" (UID: \"b2276ffb-9451-4ec9-8152-3511a43bbf2f\") " pod="kube-system/coredns-668d6bf9bc-rxhjn" Jan 29 16:05:03.226791 kubelet[2575]: I0129 16:05:03.226659 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9960e5c3-ce25-4be3-8446-00439d98e732-config-volume\") pod \"coredns-668d6bf9bc-2bpc2\" (UID: \"9960e5c3-ce25-4be3-8446-00439d98e732\") " pod="kube-system/coredns-668d6bf9bc-2bpc2" Jan 29 16:05:03.226791 kubelet[2575]: I0129 16:05:03.226678 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q85l\" (UniqueName: \"kubernetes.io/projected/9960e5c3-ce25-4be3-8446-00439d98e732-kube-api-access-7q85l\") pod \"coredns-668d6bf9bc-2bpc2\" (UID: \"9960e5c3-ce25-4be3-8446-00439d98e732\") " pod="kube-system/coredns-668d6bf9bc-2bpc2" Jan 29 16:05:03.226791 kubelet[2575]: I0129 16:05:03.226695 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrc5b\" (UniqueName: \"kubernetes.io/projected/b2276ffb-9451-4ec9-8152-3511a43bbf2f-kube-api-access-rrc5b\") pod \"coredns-668d6bf9bc-rxhjn\" (UID: \"b2276ffb-9451-4ec9-8152-3511a43bbf2f\") " pod="kube-system/coredns-668d6bf9bc-rxhjn" Jan 29 16:05:03.459803 kubelet[2575]: E0129 16:05:03.459509 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:03.461499 containerd[1493]: time="2025-01-29T16:05:03.461465324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2bpc2,Uid:9960e5c3-ce25-4be3-8446-00439d98e732,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:03.465981 kubelet[2575]: E0129 16:05:03.465875 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:03.466622 containerd[1493]: time="2025-01-29T16:05:03.466479455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rxhjn,Uid:b2276ffb-9451-4ec9-8152-3511a43bbf2f,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:03.928535 kubelet[2575]: E0129 16:05:03.928494 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:04.932322 kubelet[2575]: E0129 16:05:04.932256 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:05.209153 systemd-networkd[1407]: cilium_host: Link UP Jan 29 16:05:05.209711 systemd-networkd[1407]: cilium_net: Link UP Jan 29 16:05:05.209940 systemd-networkd[1407]: cilium_net: Gained carrier Jan 29 16:05:05.210134 systemd-networkd[1407]: cilium_host: Gained carrier Jan 29 16:05:05.285777 systemd-networkd[1407]: cilium_vxlan: Link UP Jan 29 16:05:05.285785 systemd-networkd[1407]: cilium_vxlan: Gained carrier Jan 29 16:05:05.364549 systemd-networkd[1407]: cilium_net: Gained IPv6LL Jan 29 
16:05:05.608314 kernel: NET: Registered PF_ALG protocol family Jan 29 16:05:05.778871 systemd-networkd[1407]: cilium_host: Gained IPv6LL Jan 29 16:05:05.931205 kubelet[2575]: E0129 16:05:05.931110 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:06.260110 systemd-networkd[1407]: lxc_health: Link UP Jan 29 16:05:06.262327 systemd-networkd[1407]: lxc_health: Gained carrier Jan 29 16:05:06.655337 kernel: eth0: renamed from tmp441b0 Jan 29 16:05:06.662952 systemd-networkd[1407]: lxc44065dee1ca8: Link UP Jan 29 16:05:06.663600 systemd-networkd[1407]: lxca1133d75006f: Link UP Jan 29 16:05:06.665357 kernel: eth0: renamed from tmp43118 Jan 29 16:05:06.673866 systemd-networkd[1407]: lxca1133d75006f: Gained carrier Jan 29 16:05:06.677446 systemd-networkd[1407]: lxc44065dee1ca8: Gained carrier Jan 29 16:05:06.943323 kubelet[2575]: E0129 16:05:06.943271 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:06.962001 kubelet[2575]: I0129 16:05:06.960946 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b8qp7" podStartSLOduration=8.925415695 podStartE2EDuration="16.960931249s" podCreationTimestamp="2025-01-29 16:04:50 +0000 UTC" firstStartedPulling="2025-01-29 16:04:51.053492502 +0000 UTC m=+6.355486232" lastFinishedPulling="2025-01-29 16:04:59.089008056 +0000 UTC m=+14.391001786" observedRunningTime="2025-01-29 16:05:03.942217863 +0000 UTC m=+19.244211593" watchObservedRunningTime="2025-01-29 16:05:06.960931249 +0000 UTC m=+22.262924979" Jan 29 16:05:07.307727 systemd-networkd[1407]: cilium_vxlan: Gained IPv6LL Jan 29 16:05:07.820440 systemd-networkd[1407]: lxca1133d75006f: Gained IPv6LL Jan 29 16:05:07.934925 kubelet[2575]: E0129 16:05:07.934888 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:08.139408 systemd-networkd[1407]: lxc_health: Gained IPv6LL Jan 29 16:05:08.588418 systemd-networkd[1407]: lxc44065dee1ca8: Gained IPv6LL Jan 29 16:05:08.937209 kubelet[2575]: E0129 16:05:08.936832 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:10.306683 containerd[1493]: time="2025-01-29T16:05:10.306056043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:10.306683 containerd[1493]: time="2025-01-29T16:05:10.306117928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:10.306683 containerd[1493]: time="2025-01-29T16:05:10.306132449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:10.306683 containerd[1493]: time="2025-01-29T16:05:10.306211215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:10.323030 containerd[1493]: time="2025-01-29T16:05:10.319855632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:10.323030 containerd[1493]: time="2025-01-29T16:05:10.319909636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:10.323030 containerd[1493]: time="2025-01-29T16:05:10.319920797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:10.323030 containerd[1493]: time="2025-01-29T16:05:10.319988403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:10.347480 systemd[1]: Started cri-containerd-441b018a24a5bf727e08c959cd5adbc219cd7ae30f5322cf2c51c7d405d93c95.scope - libcontainer container 441b018a24a5bf727e08c959cd5adbc219cd7ae30f5322cf2c51c7d405d93c95. Jan 29 16:05:10.350938 systemd[1]: Started cri-containerd-43118149dcd68877b416bbc5771a2bacc4fef6c11bb97fb9510a57995dba9027.scope - libcontainer container 43118149dcd68877b416bbc5771a2bacc4fef6c11bb97fb9510a57995dba9027. Jan 29 16:05:10.358893 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:05:10.361484 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:05:10.379774 containerd[1493]: time="2025-01-29T16:05:10.379694510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2bpc2,Uid:9960e5c3-ce25-4be3-8446-00439d98e732,Namespace:kube-system,Attempt:0,} returns sandbox id \"441b018a24a5bf727e08c959cd5adbc219cd7ae30f5322cf2c51c7d405d93c95\"" Jan 29 16:05:10.380578 kubelet[2575]: E0129 16:05:10.380553 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:10.381923 containerd[1493]: time="2025-01-29T16:05:10.380977570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rxhjn,Uid:b2276ffb-9451-4ec9-8152-3511a43bbf2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"43118149dcd68877b416bbc5771a2bacc4fef6c11bb97fb9510a57995dba9027\"" Jan 29 16:05:10.383265 kubelet[2575]: E0129 16:05:10.383155 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:10.384567 containerd[1493]: time="2025-01-29T16:05:10.384437238Z" level=info msg="CreateContainer within sandbox \"441b018a24a5bf727e08c959cd5adbc219cd7ae30f5322cf2c51c7d405d93c95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:05:10.386569 containerd[1493]: time="2025-01-29T16:05:10.386432553Z" level=info msg="CreateContainer within sandbox \"43118149dcd68877b416bbc5771a2bacc4fef6c11bb97fb9510a57995dba9027\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:05:10.401755 containerd[1493]: time="2025-01-29T16:05:10.401710337Z" level=info msg="CreateContainer within sandbox \"441b018a24a5bf727e08c959cd5adbc219cd7ae30f5322cf2c51c7d405d93c95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa2477eee84452e15227e61be9f39f1e75869ee1bb99c6dbca02de6a7017209a\"" Jan 29 16:05:10.402399 containerd[1493]: time="2025-01-29T16:05:10.402267460Z" level=info msg="StartContainer for 
\"aa2477eee84452e15227e61be9f39f1e75869ee1bb99c6dbca02de6a7017209a\"" Jan 29 16:05:10.414722 containerd[1493]: time="2025-01-29T16:05:10.414640259Z" level=info msg="CreateContainer within sandbox \"43118149dcd68877b416bbc5771a2bacc4fef6c11bb97fb9510a57995dba9027\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca9d7d0f7630ec1adf3f78ae34a3c816ae4c1c6e4200fadfcbc3cd6bce11ba56\"" Jan 29 16:05:10.415419 containerd[1493]: time="2025-01-29T16:05:10.415319591Z" level=info msg="StartContainer for \"ca9d7d0f7630ec1adf3f78ae34a3c816ae4c1c6e4200fadfcbc3cd6bce11ba56\"" Jan 29 16:05:10.430449 systemd[1]: Started cri-containerd-aa2477eee84452e15227e61be9f39f1e75869ee1bb99c6dbca02de6a7017209a.scope - libcontainer container aa2477eee84452e15227e61be9f39f1e75869ee1bb99c6dbca02de6a7017209a. Jan 29 16:05:10.443450 systemd[1]: Started cri-containerd-ca9d7d0f7630ec1adf3f78ae34a3c816ae4c1c6e4200fadfcbc3cd6bce11ba56.scope - libcontainer container ca9d7d0f7630ec1adf3f78ae34a3c816ae4c1c6e4200fadfcbc3cd6bce11ba56. Jan 29 16:05:10.463235 containerd[1493]: time="2025-01-29T16:05:10.462641579Z" level=info msg="StartContainer for \"aa2477eee84452e15227e61be9f39f1e75869ee1bb99c6dbca02de6a7017209a\" returns successfully" Jan 29 16:05:10.492584 containerd[1493]: time="2025-01-29T16:05:10.492509814Z" level=info msg="StartContainer for \"ca9d7d0f7630ec1adf3f78ae34a3c816ae4c1c6e4200fadfcbc3cd6bce11ba56\" returns successfully" Jan 29 16:05:10.684193 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:35400.service - OpenSSH per-connection server daemon (10.0.0.1:35400). Jan 29 16:05:10.743479 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 35400 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:10.744881 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:10.749155 systemd-logind[1467]: New session 8 of user core. Jan 29 16:05:10.763454 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:05:10.915695 sshd[3986]: Connection closed by 10.0.0.1 port 35400 Jan 29 16:05:10.916550 sshd-session[3981]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:10.919980 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:35400.service: Deactivated successfully. Jan 29 16:05:10.921748 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:05:10.922562 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:05:10.923516 systemd-logind[1467]: Removed session 8. 
Jan 29 16:05:10.947309 kubelet[2575]: E0129 16:05:10.947196 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:10.950137 kubelet[2575]: E0129 16:05:10.950013 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:10.964846 kubelet[2575]: I0129 16:05:10.964556 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rxhjn" podStartSLOduration=20.96454028 podStartE2EDuration="20.96454028s" podCreationTimestamp="2025-01-29 16:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:10.964070524 +0000 UTC m=+26.266064254" watchObservedRunningTime="2025-01-29 16:05:10.96454028 +0000 UTC m=+26.266534010" Jan 29 16:05:10.991784 kubelet[2575]: I0129 16:05:10.991710 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2bpc2" podStartSLOduration=20.991690264 podStartE2EDuration="20.991690264s" podCreationTimestamp="2025-01-29 16:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:10.977448721 +0000 UTC m=+26.279442411" watchObservedRunningTime="2025-01-29 16:05:10.991690264 +0000 UTC m=+26.293683954" Jan 29 16:05:11.315845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120995015.mount: Deactivated successfully. Jan 29 16:05:11.952183 kubelet[2575]: E0129 16:05:11.952097 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:11.952183 kubelet[2575]: E0129 16:05:11.952160 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:12.953446 kubelet[2575]: E0129 16:05:12.953402 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:12.953589 kubelet[2575]: E0129 16:05:12.953494 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:05:15.941576 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:53008.service - OpenSSH per-connection server daemon (10.0.0.1:53008). Jan 29 16:05:15.978379 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 53008 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:15.979706 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:15.983225 systemd-logind[1467]: New session 9 of user core. Jan 29 16:05:15.994486 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:05:16.113394 sshd[4009]: Connection closed by 10.0.0.1 port 53008 Jan 29 16:05:16.113740 sshd-session[4007]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:16.116603 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. 
Jan 29 16:05:16.116746 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:53008.service: Deactivated successfully. Jan 29 16:05:16.118359 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:05:16.119862 systemd-logind[1467]: Removed session 9. Jan 29 16:05:21.129221 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:53024.service - OpenSSH per-connection server daemon (10.0.0.1:53024). Jan 29 16:05:21.174115 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 53024 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:21.175437 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:21.179017 systemd-logind[1467]: New session 10 of user core. Jan 29 16:05:21.188499 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:05:21.297659 sshd[4026]: Connection closed by 10.0.0.1 port 53024 Jan 29 16:05:21.298034 sshd-session[4024]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:21.301031 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:05:21.301269 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:53024.service: Deactivated successfully. Jan 29 16:05:21.303139 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:05:21.304934 systemd-logind[1467]: Removed session 10. Jan 29 16:05:26.312201 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:58246.service - OpenSSH per-connection server daemon (10.0.0.1:58246). Jan 29 16:05:26.351649 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 58246 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:26.353418 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:26.357294 systemd-logind[1467]: New session 11 of user core. Jan 29 16:05:26.367442 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:05:26.481306 sshd[4045]: Connection closed by 10.0.0.1 port 58246 Jan 29 16:05:26.481925 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:26.493711 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:58246.service: Deactivated successfully. Jan 29 16:05:26.495374 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:05:26.495998 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:05:26.497968 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:58256.service - OpenSSH per-connection server daemon (10.0.0.1:58256). Jan 29 16:05:26.498919 systemd-logind[1467]: Removed session 11. Jan 29 16:05:26.543238 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 58256 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:26.544774 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:26.549334 systemd-logind[1467]: New session 12 of user core. Jan 29 16:05:26.557462 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:05:26.705314 sshd[4061]: Connection closed by 10.0.0.1 port 58256 Jan 29 16:05:26.706224 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:26.715631 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:58256.service: Deactivated successfully. Jan 29 16:05:26.718904 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:05:26.720453 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. 
Jan 29 16:05:26.726624 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:58260.service - OpenSSH per-connection server daemon (10.0.0.1:58260). Jan 29 16:05:26.729142 systemd-logind[1467]: Removed session 12. Jan 29 16:05:26.765058 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 58260 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:26.766628 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:26.772359 systemd-logind[1467]: New session 13 of user core. Jan 29 16:05:26.779459 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:05:26.893209 sshd[4074]: Connection closed by 10.0.0.1 port 58260 Jan 29 16:05:26.893112 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:26.896530 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:58260.service: Deactivated successfully. Jan 29 16:05:26.898376 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:05:26.901158 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:05:26.902340 systemd-logind[1467]: Removed session 13. Jan 29 16:05:31.904944 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:58268.service - OpenSSH per-connection server daemon (10.0.0.1:58268). Jan 29 16:05:31.945709 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 58268 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:31.946174 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:31.951053 systemd-logind[1467]: New session 14 of user core. Jan 29 16:05:31.960550 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:05:32.097935 sshd[4089]: Connection closed by 10.0.0.1 port 58268 Jan 29 16:05:32.098293 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:32.101884 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:58268.service: Deactivated successfully. Jan 29 16:05:32.103469 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:05:32.104054 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:05:32.104947 systemd-logind[1467]: Removed session 14. Jan 29 16:05:37.112463 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:50746.service - OpenSSH per-connection server daemon (10.0.0.1:50746). Jan 29 16:05:37.154291 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 50746 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:37.155425 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:37.159518 systemd-logind[1467]: New session 15 of user core. Jan 29 16:05:37.172452 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:05:37.296692 sshd[4104]: Connection closed by 10.0.0.1 port 50746 Jan 29 16:05:37.298161 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:37.313775 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:50746.service: Deactivated successfully. Jan 29 16:05:37.315434 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:05:37.316098 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:05:37.317603 systemd-logind[1467]: Removed session 15. Jan 29 16:05:37.327581 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:50762.service - OpenSSH per-connection server daemon (10.0.0.1:50762). 
Jan 29 16:05:37.368974 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 50762 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:37.370086 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:37.374216 systemd-logind[1467]: New session 16 of user core. Jan 29 16:05:37.380421 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:05:37.585684 sshd[4120]: Connection closed by 10.0.0.1 port 50762 Jan 29 16:05:37.584871 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:37.594465 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:50762.service: Deactivated successfully. Jan 29 16:05:37.596737 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:05:37.602324 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:05:37.608607 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:50776.service - OpenSSH per-connection server daemon (10.0.0.1:50776). Jan 29 16:05:37.609995 systemd-logind[1467]: Removed session 16. Jan 29 16:05:37.647098 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:37.648151 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:37.654366 systemd-logind[1467]: New session 17 of user core. Jan 29 16:05:37.660482 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:05:38.496845 sshd[4134]: Connection closed by 10.0.0.1 port 50776 Jan 29 16:05:38.497609 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:38.509041 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:50776.service: Deactivated successfully. Jan 29 16:05:38.512934 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:05:38.518240 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:05:38.524082 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:50778.service - OpenSSH per-connection server daemon (10.0.0.1:50778). Jan 29 16:05:38.524994 systemd-logind[1467]: Removed session 17. Jan 29 16:05:38.562257 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 50778 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:38.563747 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:38.567432 systemd-logind[1467]: New session 18 of user core. Jan 29 16:05:38.576417 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:05:38.795721 sshd[4157]: Connection closed by 10.0.0.1 port 50778 Jan 29 16:05:38.796811 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:38.810893 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:50788.service - OpenSSH per-connection server daemon (10.0.0.1:50788). Jan 29 16:05:38.811344 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:50778.service: Deactivated successfully. Jan 29 16:05:38.814706 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:05:38.816491 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:05:38.827595 systemd-logind[1467]: Removed session 18. 
Jan 29 16:05:38.859770 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 50788 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:38.861087 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:38.865337 systemd-logind[1467]: New session 19 of user core. Jan 29 16:05:38.873504 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:05:38.982157 sshd[4171]: Connection closed by 10.0.0.1 port 50788 Jan 29 16:05:38.982512 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:38.986615 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:50788.service: Deactivated successfully. Jan 29 16:05:38.990132 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:05:38.991528 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:05:38.992462 systemd-logind[1467]: Removed session 19. Jan 29 16:05:43.998656 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:40282.service - OpenSSH per-connection server daemon (10.0.0.1:40282). Jan 29 16:05:44.036461 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 40282 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:44.037539 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:44.041227 systemd-logind[1467]: New session 20 of user core. Jan 29 16:05:44.053494 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:05:44.162376 sshd[4190]: Connection closed by 10.0.0.1 port 40282 Jan 29 16:05:44.162703 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:44.166017 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:40282.service: Deactivated successfully. Jan 29 16:05:44.167752 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:05:44.169468 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:05:44.170258 systemd-logind[1467]: Removed session 20. Jan 29 16:05:49.178315 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:40298.service - OpenSSH per-connection server daemon (10.0.0.1:40298). Jan 29 16:05:49.217640 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 40298 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:49.218911 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:49.223385 systemd-logind[1467]: New session 21 of user core. Jan 29 16:05:49.230433 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:05:49.340187 sshd[4208]: Connection closed by 10.0.0.1 port 40298 Jan 29 16:05:49.340906 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:49.343939 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:40298.service: Deactivated successfully. Jan 29 16:05:49.346889 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:05:49.347616 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:05:49.348590 systemd-logind[1467]: Removed session 21. Jan 29 16:05:54.352739 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:37160.service - OpenSSH per-connection server daemon (10.0.0.1:37160). 
Jan 29 16:05:54.393371 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 37160 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:54.394150 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:54.399474 systemd-logind[1467]: New session 22 of user core. Jan 29 16:05:54.410476 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:05:54.521391 sshd[4227]: Connection closed by 10.0.0.1 port 37160 Jan 29 16:05:54.521969 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:54.531580 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:37160.service: Deactivated successfully. Jan 29 16:05:54.534017 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:05:54.536406 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:05:54.548780 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:37166.service - OpenSSH per-connection server daemon (10.0.0.1:37166). Jan 29 16:05:54.550595 systemd-logind[1467]: Removed session 22. Jan 29 16:05:54.584501 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 37166 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:54.585704 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:54.590166 systemd-logind[1467]: New session 23 of user core. Jan 29 16:05:54.601440 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:05:56.339849 containerd[1493]: time="2025-01-29T16:05:56.339785771Z" level=info msg="StopContainer for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" with timeout 30 (s)" Jan 29 16:05:56.341331 containerd[1493]: time="2025-01-29T16:05:56.340183738Z" level=info msg="Stop container \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" with signal terminated" Jan 29 16:05:56.354022 systemd[1]: cri-containerd-dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5.scope: Deactivated successfully. Jan 29 16:05:56.376455 containerd[1493]: time="2025-01-29T16:05:56.374525884Z" level=info msg="StopContainer for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" with timeout 2 (s)" Jan 29 16:05:56.376455 containerd[1493]: time="2025-01-29T16:05:56.374898970Z" level=info msg="Stop container \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" with signal terminated" Jan 29 16:05:56.375467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5-rootfs.mount: Deactivated successfully. 
Jan 29 16:05:56.377955 containerd[1493]: time="2025-01-29T16:05:56.377897342Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:05:56.380770 systemd-networkd[1407]: lxc_health: Link DOWN Jan 29 16:05:56.380778 systemd-networkd[1407]: lxc_health: Lost carrier Jan 29 16:05:56.381527 containerd[1493]: time="2025-01-29T16:05:56.381256799Z" level=info msg="shim disconnected" id=dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5 namespace=k8s.io Jan 29 16:05:56.381527 containerd[1493]: time="2025-01-29T16:05:56.381361761Z" level=warning msg="cleaning up after shim disconnected" id=dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5 namespace=k8s.io Jan 29 16:05:56.381527 containerd[1493]: time="2025-01-29T16:05:56.381413882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:56.398899 systemd[1]: cri-containerd-2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80.scope: Deactivated successfully. Jan 29 16:05:56.399409 systemd[1]: cri-containerd-2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80.scope: Consumed 6.649s CPU time, 124.5M memory peak, 192K read from disk, 12.9M written to disk. Jan 29 16:05:56.426596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80-rootfs.mount: Deactivated successfully. Jan 29 16:05:56.433669 containerd[1493]: time="2025-01-29T16:05:56.433614853Z" level=info msg="shim disconnected" id=2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80 namespace=k8s.io Jan 29 16:05:56.433669 containerd[1493]: time="2025-01-29T16:05:56.433665494Z" level=warning msg="cleaning up after shim disconnected" id=2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80 namespace=k8s.io Jan 29 16:05:56.433669 containerd[1493]: time="2025-01-29T16:05:56.433674054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:56.434068 containerd[1493]: time="2025-01-29T16:05:56.434022220Z" level=info msg="StopContainer for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" returns successfully" Jan 29 16:05:56.434712 containerd[1493]: time="2025-01-29T16:05:56.434689951Z" level=info msg="StopPodSandbox for \"0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259\"" Jan 29 16:05:56.434755 containerd[1493]: time="2025-01-29T16:05:56.434729032Z" level=info msg="Container to stop \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:05:56.436644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259-shm.mount: Deactivated successfully. Jan 29 16:05:56.441789 systemd[1]: cri-containerd-0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259.scope: Deactivated successfully. 
Jan 29 16:05:56.451027 containerd[1493]: time="2025-01-29T16:05:56.450992790Z" level=info msg="StopContainer for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" returns successfully" Jan 29 16:05:56.451470 containerd[1493]: time="2025-01-29T16:05:56.451444758Z" level=info msg="StopPodSandbox for \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\"" Jan 29 16:05:56.451516 containerd[1493]: time="2025-01-29T16:05:56.451479718Z" level=info msg="Container to stop \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:05:56.451516 containerd[1493]: time="2025-01-29T16:05:56.451491478Z" level=info msg="Container to stop \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:05:56.451516 containerd[1493]: time="2025-01-29T16:05:56.451499478Z" level=info msg="Container to stop \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:05:56.451516 containerd[1493]: time="2025-01-29T16:05:56.451507439Z" level=info msg="Container to stop \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:05:56.451516 containerd[1493]: time="2025-01-29T16:05:56.451515439Z" level=info msg="Container to stop \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:05:56.453130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7-shm.mount: Deactivated successfully. Jan 29 16:05:56.460427 systemd[1]: cri-containerd-9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7.scope: Deactivated successfully. 
Jan 29 16:05:56.482072 containerd[1493]: time="2025-01-29T16:05:56.481992359Z" level=info msg="shim disconnected" id=0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259 namespace=k8s.io Jan 29 16:05:56.482072 containerd[1493]: time="2025-01-29T16:05:56.482047080Z" level=warning msg="cleaning up after shim disconnected" id=0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259 namespace=k8s.io Jan 29 16:05:56.482072 containerd[1493]: time="2025-01-29T16:05:56.482055640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:56.483603 containerd[1493]: time="2025-01-29T16:05:56.483195660Z" level=info msg="shim disconnected" id=9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7 namespace=k8s.io Jan 29 16:05:56.483603 containerd[1493]: time="2025-01-29T16:05:56.483553226Z" level=warning msg="cleaning up after shim disconnected" id=9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7 namespace=k8s.io Jan 29 16:05:56.483603 containerd[1493]: time="2025-01-29T16:05:56.483563986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:56.503237 containerd[1493]: time="2025-01-29T16:05:56.502902676Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:05:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:05:56.517772 containerd[1493]: time="2025-01-29T16:05:56.517721209Z" level=info msg="TearDown network for sandbox \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" successfully" Jan 29 16:05:56.517921 containerd[1493]: time="2025-01-29T16:05:56.517905532Z" level=info msg="StopPodSandbox for \"9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7\" returns successfully" Jan 29 16:05:56.518078 containerd[1493]: time="2025-01-29T16:05:56.517798651Z" level=info msg="TearDown network for sandbox \"0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259\" successfully" Jan 29 16:05:56.518078 containerd[1493]: time="2025-01-29T16:05:56.518069855Z" level=info msg="StopPodSandbox for \"0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259\" returns successfully" Jan 29 16:05:56.560968 kubelet[2575]: I0129 16:05:56.560913 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-net\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.560968 kubelet[2575]: I0129 16:05:56.560955 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-etc-cni-netd\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.560968 kubelet[2575]: I0129 16:05:56.560975 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-bpf-maps\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561544 kubelet[2575]: I0129 16:05:56.561000 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtmf\" (UniqueName: 
\"kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-kube-api-access-cqtmf\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561544 kubelet[2575]: I0129 16:05:56.561019 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-config-path\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561544 kubelet[2575]: I0129 16:05:56.561032 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-lib-modules\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561544 kubelet[2575]: I0129 16:05:56.561070 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-xtables-lock\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561544 kubelet[2575]: I0129 16:05:56.561088 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c82e933-8b52-443e-879d-bf71a97c89ac-clustermesh-secrets\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561544 kubelet[2575]: I0129 16:05:56.561109 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-hubble-tls\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561703 kubelet[2575]: I0129 16:05:56.561128 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef2c2b0e-aa33-485c-ba90-94c736c51d47-cilium-config-path\") pod \"ef2c2b0e-aa33-485c-ba90-94c736c51d47\" (UID: \"ef2c2b0e-aa33-485c-ba90-94c736c51d47\") " Jan 29 16:05:56.561703 kubelet[2575]: I0129 16:05:56.561146 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-cgroup\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561703 kubelet[2575]: I0129 16:05:56.561159 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cni-path\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561703 kubelet[2575]: I0129 16:05:56.561174 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-run\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561703 kubelet[2575]: I0129 16:05:56.561189 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-kernel\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.561703 kubelet[2575]: I0129 16:05:56.561206 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nmd4\" (UniqueName: \"kubernetes.io/projected/ef2c2b0e-aa33-485c-ba90-94c736c51d47-kube-api-access-4nmd4\") pod \"ef2c2b0e-aa33-485c-ba90-94c736c51d47\" (UID: \"ef2c2b0e-aa33-485c-ba90-94c736c51d47\") " Jan 29 16:05:56.561867 kubelet[2575]: I0129 16:05:56.561224 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-hostproc\") pod \"2c82e933-8b52-443e-879d-bf71a97c89ac\" (UID: \"2c82e933-8b52-443e-879d-bf71a97c89ac\") " Jan 29 16:05:56.564998 kubelet[2575]: I0129 16:05:56.564959 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.565077 kubelet[2575]: I0129 16:05:56.564995 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.567233 kubelet[2575]: I0129 16:05:56.566776 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:05:56.567233 kubelet[2575]: I0129 16:05:56.566857 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.567948 kubelet[2575]: I0129 16:05:56.567922 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-kube-api-access-cqtmf" (OuterVolumeSpecName: "kube-api-access-cqtmf") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "kube-api-access-cqtmf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:05:56.568067 kubelet[2575]: I0129 16:05:56.568049 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568154 kubelet[2575]: I0129 16:05:56.568140 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568228 kubelet[2575]: I0129 16:05:56.568217 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568314 kubelet[2575]: I0129 16:05:56.568296 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568402 kubelet[2575]: I0129 16:05:56.568388 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568474 kubelet[2575]: I0129 16:05:56.568462 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568612 kubelet[2575]: I0129 16:05:56.568587 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2c2b0e-aa33-485c-ba90-94c736c51d47-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef2c2b0e-aa33-485c-ba90-94c736c51d47" (UID: "ef2c2b0e-aa33-485c-ba90-94c736c51d47"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:05:56.568657 kubelet[2575]: I0129 16:05:56.568631 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:05:56.568857 kubelet[2575]: I0129 16:05:56.568826 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:05:56.570263 kubelet[2575]: I0129 16:05:56.570230 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c82e933-8b52-443e-879d-bf71a97c89ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c82e933-8b52-443e-879d-bf71a97c89ac" (UID: "2c82e933-8b52-443e-879d-bf71a97c89ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 16:05:56.570680 kubelet[2575]: I0129 16:05:56.570645 2575 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2c2b0e-aa33-485c-ba90-94c736c51d47-kube-api-access-4nmd4" (OuterVolumeSpecName: "kube-api-access-4nmd4") pod "ef2c2b0e-aa33-485c-ba90-94c736c51d47" (UID: "ef2c2b0e-aa33-485c-ba90-94c736c51d47"). InnerVolumeSpecName "kube-api-access-4nmd4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662035 2575 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662064 2575 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662074 2575 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662082 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662092 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cqtmf\" (UniqueName: \"kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-kube-api-access-cqtmf\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662100 2575 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662115 2575 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662306 kubelet[2575]: I0129 16:05:56.662124 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662133 2575 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662142 2575 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/2c82e933-8b52-443e-879d-bf71a97c89ac-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662149 2575 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c82e933-8b52-443e-879d-bf71a97c89ac-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662157 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef2c2b0e-aa33-485c-ba90-94c736c51d47-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662164 2575 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662171 2575 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662178 2575 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c82e933-8b52-443e-879d-bf71a97c89ac-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.662552 kubelet[2575]: I0129 16:05:56.662185 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nmd4\" (UniqueName: \"kubernetes.io/projected/ef2c2b0e-aa33-485c-ba90-94c736c51d47-kube-api-access-4nmd4\") on node \"localhost\" DevicePath \"\"" Jan 29 16:05:56.802825 systemd[1]: Removed slice kubepods-besteffort-podef2c2b0e_aa33_485c_ba90_94c736c51d47.slice - libcontainer container kubepods-besteffort-podef2c2b0e_aa33_485c_ba90_94c736c51d47.slice. Jan 29 16:05:56.803970 systemd[1]: Removed slice kubepods-burstable-pod2c82e933_8b52_443e_879d_bf71a97c89ac.slice - libcontainer container kubepods-burstable-pod2c82e933_8b52_443e_879d_bf71a97c89ac.slice. Jan 29 16:05:56.804062 systemd[1]: kubepods-burstable-pod2c82e933_8b52_443e_879d_bf71a97c89ac.slice: Consumed 6.808s CPU time, 124.8M memory peak, 300K read from disk, 12.9M written to disk. 
Jan 29 16:05:57.038906 kubelet[2575]: I0129 16:05:57.038867 2575 scope.go:117] "RemoveContainer" containerID="dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5" Jan 29 16:05:57.041925 containerd[1493]: time="2025-01-29T16:05:57.041612260Z" level=info msg="RemoveContainer for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\"" Jan 29 16:05:57.046960 containerd[1493]: time="2025-01-29T16:05:57.046921109Z" level=info msg="RemoveContainer for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" returns successfully" Jan 29 16:05:57.047642 kubelet[2575]: I0129 16:05:57.047605 2575 scope.go:117] "RemoveContainer" containerID="dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5" Jan 29 16:05:57.047873 containerd[1493]: time="2025-01-29T16:05:57.047830404Z" level=error msg="ContainerStatus for \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\": not found" Jan 29 16:05:57.055410 kubelet[2575]: E0129 16:05:57.054935 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\": not found" containerID="dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5" Jan 29 16:05:57.056145 kubelet[2575]: I0129 16:05:57.055424 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5"} err="failed to get container status \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfcf5240f82c4910c848ba1ccd6989d6d770c326be6b2277785a798901e235e5\": not found" Jan 29 16:05:57.056145 kubelet[2575]: I0129 16:05:57.055600 2575 scope.go:117] "RemoveContainer" containerID="2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80" Jan 29 16:05:57.057777 containerd[1493]: time="2025-01-29T16:05:57.057673208Z" level=info msg="RemoveContainer for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\"" Jan 29 16:05:57.060289 containerd[1493]: time="2025-01-29T16:05:57.060184770Z" level=info msg="RemoveContainer for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" returns successfully" Jan 29 16:05:57.061254 kubelet[2575]: I0129 16:05:57.060359 2575 scope.go:117] "RemoveContainer" containerID="b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0" Jan 29 16:05:57.061371 containerd[1493]: time="2025-01-29T16:05:57.061235188Z" level=info msg="RemoveContainer for \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\"" Jan 29 16:05:57.069011 containerd[1493]: time="2025-01-29T16:05:57.068951516Z" level=info msg="RemoveContainer for \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\" returns successfully" Jan 29 16:05:57.072837 kubelet[2575]: I0129 16:05:57.069238 2575 scope.go:117] "RemoveContainer" containerID="920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91" Jan 29 16:05:57.074131 containerd[1493]: time="2025-01-29T16:05:57.074088682Z" level=info msg="RemoveContainer for \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\"" Jan 29 16:05:57.079784 containerd[1493]: time="2025-01-29T16:05:57.079744897Z" level=info 
msg="RemoveContainer for \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\" returns successfully" Jan 29 16:05:57.080142 kubelet[2575]: I0129 16:05:57.080104 2575 scope.go:117] "RemoveContainer" containerID="669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac" Jan 29 16:05:57.087463 containerd[1493]: time="2025-01-29T16:05:57.087179861Z" level=info msg="RemoveContainer for \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\"" Jan 29 16:05:57.091660 containerd[1493]: time="2025-01-29T16:05:57.091623895Z" level=info msg="RemoveContainer for \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\" returns successfully" Jan 29 16:05:57.093454 kubelet[2575]: I0129 16:05:57.093425 2575 scope.go:117] "RemoveContainer" containerID="bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5" Jan 29 16:05:57.094807 containerd[1493]: time="2025-01-29T16:05:57.094773227Z" level=info msg="RemoveContainer for \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\"" Jan 29 16:05:57.097146 containerd[1493]: time="2025-01-29T16:05:57.097097986Z" level=info msg="RemoveContainer for \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\" returns successfully" Jan 29 16:05:57.097367 kubelet[2575]: I0129 16:05:57.097317 2575 scope.go:117] "RemoveContainer" containerID="2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80" Jan 29 16:05:57.097559 containerd[1493]: time="2025-01-29T16:05:57.097525033Z" level=error msg="ContainerStatus for \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\": not found" Jan 29 16:05:57.097671 kubelet[2575]: E0129 16:05:57.097648 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\": not found" containerID="2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80" Jan 29 16:05:57.097710 kubelet[2575]: I0129 16:05:57.097675 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80"} err="failed to get container status \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\": rpc error: code = NotFound desc = an error occurred when try to find container \"2af2ab81e866b67b58662a39e77b81731e651c457b8fc6289a36ec574cb91e80\": not found" Jan 29 16:05:57.097710 kubelet[2575]: I0129 16:05:57.097695 2575 scope.go:117] "RemoveContainer" containerID="b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0" Jan 29 16:05:57.098178 kubelet[2575]: E0129 16:05:57.097966 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\": not found" containerID="b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0" Jan 29 16:05:57.098178 kubelet[2575]: I0129 16:05:57.097991 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0"} err="failed to get container status \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\": not found" Jan 29 16:05:57.098178 kubelet[2575]: I0129 16:05:57.098008 2575 scope.go:117] "RemoveContainer" containerID="920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91" Jan 29 16:05:57.098249 containerd[1493]: time="2025-01-29T16:05:57.097851679Z" level=error msg="ContainerStatus for \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b58be859c8f2049c6a521f10f0c6958d95fc36218d5050daf02d48d4d7c4ebb0\": not found" Jan 29 16:05:57.098249 containerd[1493]: time="2025-01-29T16:05:57.098164044Z" level=error msg="ContainerStatus for \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\": not found" Jan 29 16:05:57.099379 containerd[1493]: time="2025-01-29T16:05:57.098424408Z" level=error msg="ContainerStatus for \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\": not found" Jan 29 16:05:57.099379 containerd[1493]: time="2025-01-29T16:05:57.098621852Z" level=error msg="ContainerStatus for \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\": not found" Jan 29 16:05:57.099433 kubelet[2575]: E0129 16:05:57.098253 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\": not found" containerID="920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91" Jan 29 16:05:57.099433 kubelet[2575]: I0129 16:05:57.098269 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91"} err="failed to get container status \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\": rpc error: code = NotFound desc = an error occurred when try to find container \"920bcfb1fba289c374461df41882270cbec3b1924165b600430e59c07456bc91\": not found" Jan 29 16:05:57.099433 kubelet[2575]: I0129 16:05:57.098297 2575 scope.go:117] "RemoveContainer" containerID="669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac" Jan 29 16:05:57.099433 kubelet[2575]: E0129 16:05:57.098501 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\": not found" containerID="669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac" Jan 29 16:05:57.099433 kubelet[2575]: I0129 16:05:57.098515 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac"} err="failed to get container status \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"669cc102e2a45c80299e2de86a28f4e3c58487488028c6c4311ae36c0d3cd1ac\": not found" Jan 29 16:05:57.099433 kubelet[2575]: I0129 16:05:57.098527 2575 scope.go:117] "RemoveContainer" containerID="bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5" Jan 29 16:05:57.099559 kubelet[2575]: E0129 16:05:57.098692 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\": not found" containerID="bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5" Jan 29 16:05:57.099559 kubelet[2575]: I0129 16:05:57.098705 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5"} err="failed to get container status \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc0f99935cb6da5a50a287ccba3d33e914559ef32c6233bb20872c6388b52ad5\": not found" Jan 29 16:05:57.353824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dec453c86b39e016acf01425c2beb0a33159662848e0bc6354102dfc4b7d259-rootfs.mount: Deactivated successfully. Jan 29 16:05:57.353922 systemd[1]: var-lib-kubelet-pods-ef2c2b0e\x2daa33\x2d485c\x2dba90\x2d94c736c51d47-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nmd4.mount: Deactivated successfully. Jan 29 16:05:57.353975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c079f448f18c37c6314b0027009a96f8f79b4c468211b6a717d33d1aee3abf7-rootfs.mount: Deactivated successfully. Jan 29 16:05:57.354027 systemd[1]: var-lib-kubelet-pods-2c82e933\x2d8b52\x2d443e\x2d879d\x2dbf71a97c89ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqtmf.mount: Deactivated successfully. Jan 29 16:05:57.354081 systemd[1]: var-lib-kubelet-pods-2c82e933\x2d8b52\x2d443e\x2d879d\x2dbf71a97c89ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:05:57.354143 systemd[1]: var-lib-kubelet-pods-2c82e933\x2d8b52\x2d443e\x2d879d\x2dbf71a97c89ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:05:58.297092 sshd[4242]: Connection closed by 10.0.0.1 port 37166 Jan 29 16:05:58.297846 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:58.307493 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:37166.service: Deactivated successfully. Jan 29 16:05:58.309020 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:05:58.309226 systemd[1]: session-23.scope: Consumed 1.067s CPU time, 24.7M memory peak. Jan 29 16:05:58.309892 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:05:58.311577 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:37176.service - OpenSSH per-connection server daemon (10.0.0.1:37176). Jan 29 16:05:58.312627 systemd-logind[1467]: Removed session 23. Jan 29 16:05:58.357344 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 37176 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:05:58.358729 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:58.363150 systemd-logind[1467]: New session 24 of user core. Jan 29 16:05:58.374492 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 16:05:58.793413 kubelet[2575]: I0129 16:05:58.793367 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c82e933-8b52-443e-879d-bf71a97c89ac" path="/var/lib/kubelet/pods/2c82e933-8b52-443e-879d-bf71a97c89ac/volumes" Jan 29 16:05:58.794394 kubelet[2575]: I0129 16:05:58.794022 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2c2b0e-aa33-485c-ba90-94c736c51d47" path="/var/lib/kubelet/pods/ef2c2b0e-aa33-485c-ba90-94c736c51d47/volumes" Jan 29 16:05:59.834976 kubelet[2575]: E0129 16:05:59.834914 2575 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:06:00.595122 sshd[4404]: Connection closed by 10.0.0.1 port 37176 Jan 29 16:06:00.595787 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:00.606249 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:37192.service - OpenSSH per-connection server daemon (10.0.0.1:37192). Jan 29 16:06:00.606675 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:37176.service: Deactivated successfully. Jan 29 16:06:00.609052 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:06:00.609266 systemd[1]: session-24.scope: Consumed 2.142s CPU time, 28.5M memory peak. Jan 29 16:06:00.612959 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:06:00.617783 systemd-logind[1467]: Removed session 24. Jan 29 16:06:00.618259 kubelet[2575]: I0129 16:06:00.618231 2575 memory_manager.go:355] "RemoveStaleState removing state" podUID="2c82e933-8b52-443e-879d-bf71a97c89ac" containerName="cilium-agent" Jan 29 16:06:00.619240 kubelet[2575]: I0129 16:06:00.619206 2575 memory_manager.go:355] "RemoveStaleState removing state" podUID="ef2c2b0e-aa33-485c-ba90-94c736c51d47" containerName="cilium-operator" Jan 29 16:06:00.631135 systemd[1]: Created slice kubepods-burstable-poda69ca53a_ecd9_416d_99f1_8ceb18ca40ed.slice - libcontainer container kubepods-burstable-poda69ca53a_ecd9_416d_99f1_8ceb18ca40ed.slice. Jan 29 16:06:00.675778 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 37192 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:06:00.677078 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:00.682173 systemd-logind[1467]: New session 25 of user core. 
Jan 29 16:06:00.686045 kubelet[2575]: I0129 16:06:00.685957 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-clustermesh-secrets\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686045 kubelet[2575]: I0129 16:06:00.686001 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-cilium-config-path\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686045 kubelet[2575]: I0129 16:06:00.686022 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxz8j\" (UniqueName: \"kubernetes.io/projected/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-kube-api-access-qxz8j\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686187 kubelet[2575]: I0129 16:06:00.686066 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-cilium-run\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686187 kubelet[2575]: I0129 16:06:00.686119 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-cilium-cgroup\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686187 kubelet[2575]: I0129 16:06:00.686142 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-lib-modules\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686187 kubelet[2575]: I0129 16:06:00.686170 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-cilium-ipsec-secrets\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686415 kubelet[2575]: I0129 16:06:00.686191 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-bpf-maps\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686415 kubelet[2575]: I0129 16:06:00.686206 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-etc-cni-netd\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686415 kubelet[2575]: I0129 16:06:00.686220 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-hubble-tls\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686415 kubelet[2575]: I0129 16:06:00.686238 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-hostproc\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686415 kubelet[2575]: I0129 16:06:00.686254 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-xtables-lock\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686415 kubelet[2575]: I0129 16:06:00.686270 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-host-proc-sys-net\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686562 kubelet[2575]: I0129 16:06:00.686329 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-cni-path\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.686562 kubelet[2575]: I0129 16:06:00.686347 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a69ca53a-ecd9-416d-99f1-8ceb18ca40ed-host-proc-sys-kernel\") pod \"cilium-67m47\" (UID: \"a69ca53a-ecd9-416d-99f1-8ceb18ca40ed\") " pod="kube-system/cilium-67m47" Jan 29 16:06:00.690448 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:06:00.739873 sshd[4418]: Connection closed by 10.0.0.1 port 37192 Jan 29 16:06:00.740396 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:00.756506 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:37192.service: Deactivated successfully. Jan 29 16:06:00.758113 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:06:00.758800 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:06:00.765584 systemd[1]: Started sshd@25-10.0.0.43:22-10.0.0.1:37208.service - OpenSSH per-connection server daemon (10.0.0.1:37208). Jan 29 16:06:00.766484 systemd-logind[1467]: Removed session 25. Jan 29 16:06:00.820799 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 37208 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 16:06:00.822006 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:00.825702 systemd-logind[1467]: New session 26 of user core. Jan 29 16:06:00.832441 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 29 16:06:00.946846 kubelet[2575]: E0129 16:06:00.946796 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:06:00.948111 containerd[1493]: time="2025-01-29T16:06:00.947312448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-67m47,Uid:a69ca53a-ecd9-416d-99f1-8ceb18ca40ed,Namespace:kube-system,Attempt:0,}" Jan 29 16:06:00.965502 containerd[1493]: time="2025-01-29T16:06:00.965396450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:06:00.965502 containerd[1493]: time="2025-01-29T16:06:00.965458731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:06:00.965502 containerd[1493]: time="2025-01-29T16:06:00.965470731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:06:00.965678 containerd[1493]: time="2025-01-29T16:06:00.965541932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:06:00.982489 systemd[1]: Started cri-containerd-9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467.scope - libcontainer container 9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467. Jan 29 16:06:01.000967 containerd[1493]: time="2025-01-29T16:06:01.000911005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-67m47,Uid:a69ca53a-ecd9-416d-99f1-8ceb18ca40ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\"" Jan 29 16:06:01.001884 kubelet[2575]: E0129 16:06:01.001861 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:06:01.005667 containerd[1493]: time="2025-01-29T16:06:01.005527596Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:06:01.015315 containerd[1493]: time="2025-01-29T16:06:01.015188224Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132\"" Jan 29 16:06:01.015822 containerd[1493]: time="2025-01-29T16:06:01.015691471Z" level=info msg="StartContainer for \"22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132\"" Jan 29 16:06:01.042513 systemd[1]: Started cri-containerd-22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132.scope - libcontainer container 22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132. Jan 29 16:06:01.066933 containerd[1493]: time="2025-01-29T16:06:01.066248844Z" level=info msg="StartContainer for \"22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132\" returns successfully" Jan 29 16:06:01.072735 systemd[1]: cri-containerd-22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132.scope: Deactivated successfully. 
Jan 29 16:06:01.101322 containerd[1493]: time="2025-01-29T16:06:01.101241059Z" level=info msg="shim disconnected" id=22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132 namespace=k8s.io
Jan 29 16:06:01.101322 containerd[1493]: time="2025-01-29T16:06:01.101307980Z" level=warning msg="cleaning up after shim disconnected" id=22b8c028b4201744cac42d1a59ead83ea74ce7e3cea3b989dc73e93af06e9132 namespace=k8s.io
Jan 29 16:06:01.101322 containerd[1493]: time="2025-01-29T16:06:01.101318261Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:06:02.070195 kubelet[2575]: E0129 16:06:02.069980 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:02.072385 containerd[1493]: time="2025-01-29T16:06:02.072285205Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:06:02.087511 containerd[1493]: time="2025-01-29T16:06:02.086898984Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0\""
Jan 29 16:06:02.087924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359597824.mount: Deactivated successfully.
Jan 29 16:06:02.088848 containerd[1493]: time="2025-01-29T16:06:02.087947999Z" level=info msg="StartContainer for \"ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0\""
Jan 29 16:06:02.119438 systemd[1]: Started cri-containerd-ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0.scope - libcontainer container ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0.
Jan 29 16:06:02.138534 containerd[1493]: time="2025-01-29T16:06:02.138476036Z" level=info msg="StartContainer for \"ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0\" returns successfully"
Jan 29 16:06:02.149389 systemd[1]: cri-containerd-ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0.scope: Deactivated successfully.
Jan 29 16:06:02.169648 containerd[1493]: time="2025-01-29T16:06:02.169579541Z" level=info msg="shim disconnected" id=ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0 namespace=k8s.io
Jan 29 16:06:02.169648 containerd[1493]: time="2025-01-29T16:06:02.169638902Z" level=warning msg="cleaning up after shim disconnected" id=ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0 namespace=k8s.io
Jan 29 16:06:02.169648 containerd[1493]: time="2025-01-29T16:06:02.169648382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:06:02.791050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec8ef275933b37292af925139ad38a385982a87192f26d30873c815667d01bb0-rootfs.mount: Deactivated successfully.
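The dns.go:153 error that recurs throughout this log is kubelet warning that the node's resolv.conf lists more nameservers than the resolver supports: it applies the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) and omits the rest. A minimal standalone sketch of that clamping logic, assuming the conventional three-server limit and the /etc/resolv.conf path:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit that kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the log: the first three are applied, the rest are omitted.
		fmt.Printf("nameserver limits exceeded: applying %q, omitting %q\n",
			strings.Join(servers[:maxNameservers], " "),
			strings.Join(servers[maxNameservers:], " "))
	}
}
```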
Jan 29 16:06:03.074446 kubelet[2575]: E0129 16:06:03.074330 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:03.076076 containerd[1493]: time="2025-01-29T16:06:03.076031645Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:06:03.093707 containerd[1493]: time="2025-01-29T16:06:03.093655023Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a\""
Jan 29 16:06:03.095315 containerd[1493]: time="2025-01-29T16:06:03.094348873Z" level=info msg="StartContainer for \"60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a\""
Jan 29 16:06:03.121435 systemd[1]: Started cri-containerd-60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a.scope - libcontainer container 60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a.
Jan 29 16:06:03.145086 systemd[1]: cri-containerd-60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a.scope: Deactivated successfully.
Jan 29 16:06:03.146020 containerd[1493]: time="2025-01-29T16:06:03.145985150Z" level=info msg="StartContainer for \"60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a\" returns successfully"
Jan 29 16:06:03.163343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a-rootfs.mount: Deactivated successfully.
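The mount-bpf-fs init step that just ran to completion ensures a BPF filesystem is mounted at /sys/fs/bpf so the agent's eBPF maps persist across agent restarts. A hypothetical standalone equivalent (this is not Cilium's code, and it needs root / CAP_SYS_ADMIN):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```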
Jan 29 16:06:03.167184 containerd[1493]: time="2025-01-29T16:06:03.167112579Z" level=info msg="shim disconnected" id=60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a namespace=k8s.io
Jan 29 16:06:03.167184 containerd[1493]: time="2025-01-29T16:06:03.167184700Z" level=warning msg="cleaning up after shim disconnected" id=60686ea1cace938a156d4cde221834991ccf5f3596615abf739385c77a5f703a namespace=k8s.io
Jan 29 16:06:03.167323 containerd[1493]: time="2025-01-29T16:06:03.167196380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:06:04.078592 kubelet[2575]: E0129 16:06:04.078525 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:04.081608 containerd[1493]: time="2025-01-29T16:06:04.081000227Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:06:04.096096 containerd[1493]: time="2025-01-29T16:06:04.096052963Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e\""
Jan 29 16:06:04.097033 containerd[1493]: time="2025-01-29T16:06:04.096995017Z" level=info msg="StartContainer for \"38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e\""
Jan 29 16:06:04.120470 systemd[1]: Started cri-containerd-38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e.scope - libcontainer container 38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e.
Jan 29 16:06:04.140873 systemd[1]: cri-containerd-38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e.scope: Deactivated successfully.
Jan 29 16:06:04.142825 containerd[1493]: time="2025-01-29T16:06:04.142772874Z" level=info msg="StartContainer for \"38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e\" returns successfully"
Jan 29 16:06:04.157086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e-rootfs.mount: Deactivated successfully.
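Each init container here follows the same short lifecycle: its scope starts, the container exits, the shim disconnects, and the rootfs mount is cleaned up. A hedged sketch (not kubelet or containerd source) of how a client in the same k8s.io namespace could watch the clean-cilium-state task exit, using the containerd Go client:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container id copied from the records above.
	c, err := client.LoadContainer(ctx, "38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	statusC, err := task.Wait(ctx) // resolves once the task (and its shim) exits
	if err != nil {
		log.Fatal(err)
	}
	status := <-statusC
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("clean-cilium-state exited with code", code)
}
```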
Jan 29 16:06:04.160122 containerd[1493]: time="2025-01-29T16:06:04.160068802Z" level=info msg="shim disconnected" id=38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e namespace=k8s.io
Jan 29 16:06:04.160122 containerd[1493]: time="2025-01-29T16:06:04.160118882Z" level=warning msg="cleaning up after shim disconnected" id=38b9a71a4d167bba22dc7a099d7f09e1b9bd63bf2e1c94057aea35d626c66e5e namespace=k8s.io
Jan 29 16:06:04.160247 containerd[1493]: time="2025-01-29T16:06:04.160127483Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:06:04.836716 kubelet[2575]: E0129 16:06:04.836677 2575 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:06:05.083247 kubelet[2575]: E0129 16:06:05.082696 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:05.088554 containerd[1493]: time="2025-01-29T16:06:05.088464738Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:06:05.105767 containerd[1493]: time="2025-01-29T16:06:05.105715020Z" level=info msg="CreateContainer within sandbox \"9c06d62ed93eeeaf95ff6104dd6ea07948c6511300223ef231ccda095fabc467\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66bc8e4f5aa93763a0e1be189cf0ab80b96b4d23176fb5841ee2f3930eab405e\""
Jan 29 16:06:05.106220 containerd[1493]: time="2025-01-29T16:06:05.106166306Z" level=info msg="StartContainer for \"66bc8e4f5aa93763a0e1be189cf0ab80b96b4d23176fb5841ee2f3930eab405e\""
Jan 29 16:06:05.136437 systemd[1]: Started cri-containerd-66bc8e4f5aa93763a0e1be189cf0ab80b96b4d23176fb5841ee2f3930eab405e.scope - libcontainer container 66bc8e4f5aa93763a0e1be189cf0ab80b96b4d23176fb5841ee2f3930eab405e.
Jan 29 16:06:05.158792 containerd[1493]: time="2025-01-29T16:06:05.158752805Z" level=info msg="StartContainer for \"66bc8e4f5aa93763a0e1be189cf0ab80b96b4d23176fb5841ee2f3930eab405e\" returns successfully"
Jan 29 16:06:05.408339 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 16:06:06.087637 kubelet[2575]: E0129 16:06:06.087608 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:06.098463 systemd[1]: run-containerd-runc-k8s.io-66bc8e4f5aa93763a0e1be189cf0ab80b96b4d23176fb5841ee2f3930eab405e-runc.VLtyQD.mount: Deactivated successfully.
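With all four init steps done, the long-running cilium-agent container starts (note that no shim-disconnect record follows it), and the kernel's seqiv(rfc4106(gcm(aes))) line is likely the agent probing crypto support during startup. Meanwhile kubelet still reports NetworkReady=false because no CNI config has been installed yet. A hedged, illustrative sketch of reading that runtime condition back over the same CRI socket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range resp.GetStatus().GetConditions() {
		// Until the agent writes a CNI config, expect:
		//   NetworkReady=false reason=NetworkPluginNotReady
		fmt.Printf("%s=%t reason=%s message=%q\n",
			cond.Type, cond.Status, cond.Reason, cond.Message)
	}
}
```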
Jan 29 16:06:06.102572 kubelet[2575]: I0129 16:06:06.102226 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-67m47" podStartSLOduration=6.10221076 podStartE2EDuration="6.10221076s" podCreationTimestamp="2025-01-29 16:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:06:06.101547271 +0000 UTC m=+81.403541001" watchObservedRunningTime="2025-01-29 16:06:06.10221076 +0000 UTC m=+81.404204490"
Jan 29 16:06:06.542382 kubelet[2575]: I0129 16:06:06.542330 2575 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:06:06Z","lastTransitionTime":"2025-01-29T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:06:06.792001 kubelet[2575]: E0129 16:06:06.791356 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:07.089246 kubelet[2575]: E0129 16:06:07.089198 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:08.090622 kubelet[2575]: E0129 16:06:08.090596 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:08.265545 systemd-networkd[1407]: lxc_health: Link UP
Jan 29 16:06:08.267654 systemd-networkd[1407]: lxc_health: Gained carrier
Jan 29 16:06:09.091977 kubelet[2575]: E0129 16:06:09.091940 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:09.708466 systemd-networkd[1407]: lxc_health: Gained IPv6LL
Jan 29 16:06:09.791715 kubelet[2575]: E0129 16:06:09.791611 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:10.093889 kubelet[2575]: E0129 16:06:10.093465 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:11.095330 kubelet[2575]: E0129 16:06:11.094843 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:06:13.568829 sshd[4431]: Connection closed by 10.0.0.1 port 37208
Jan 29 16:06:13.569662 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
Jan 29 16:06:13.573201 systemd[1]: sshd@25-10.0.0.43:22-10.0.0.1:37208.service: Deactivated successfully.
Jan 29 16:06:13.575161 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:06:13.575889 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:06:13.576732 systemd-logind[1467]: Removed session 26.
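The podStartSLOduration arithmetic in the first record of this span checks out: with zero image-pull timestamps (nothing was pulled), the 6.10221076s is simply watchObservedRunningTime minus podCreationTimestamp. A quick standalone verification:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the timestamps in the kubelet record above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-01-29 16:06:00 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2025-01-29 16:06:06.10221076 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(observed.Sub(created)) // 6.10221076s, matching podStartSLOduration
}
```

The lxc_health interface that comes up mid-span is, in Cilium's design, the veth pair serving the agent's health-check endpoint; the final records simply close out SSH session 26 and end this capture.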