Jan 29 10:50:53.950601 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 10:50:53.950622 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 10:50:53.950632 kernel: KASLR enabled
Jan 29 10:50:53.950638 kernel: efi: EFI v2.7 by EDK II
Jan 29 10:50:53.950643 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 29 10:50:53.950648 kernel: random: crng init done
Jan 29 10:50:53.950655 kernel: secureboot: Secure boot disabled
Jan 29 10:50:53.950661 kernel: ACPI: Early table checksum verification disabled
Jan 29 10:50:53.950667 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 10:50:53.950674 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 10:50:53.950680 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950686 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950691 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950697 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950704 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950712 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950718 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950724 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950730 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:50:53.950736 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 10:50:53.950742 kernel: NUMA: Failed to initialise from firmware
Jan 29 10:50:53.950748 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 10:50:53.950754 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Jan 29 10:50:53.950760 kernel: Zone ranges:
Jan 29 10:50:53.950766 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 10:50:53.950773 kernel: DMA32 empty
Jan 29 10:50:53.950779 kernel: Normal empty
Jan 29 10:50:53.950785 kernel: Movable zone start for each node
Jan 29 10:50:53.950791 kernel: Early memory node ranges
Jan 29 10:50:53.950797 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 29 10:50:53.950803 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 29 10:50:53.950809 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 29 10:50:53.950814 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 10:50:53.950820 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 10:50:53.950826 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 10:50:53.950832 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 10:50:53.950838 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 10:50:53.950845 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 10:50:53.950852 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 10:50:53.950858 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 10:50:53.950866 kernel: psci: probing for conduit method from ACPI.
Jan 29 10:50:53.950873 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 10:50:53.950880 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 10:50:53.950887 kernel: psci: Trusted OS migration not required
Jan 29 10:50:53.950907 kernel: psci: SMC Calling Convention v1.1
Jan 29 10:50:53.950914 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 10:50:53.950920 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 10:50:53.950927 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 10:50:53.950934 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 10:50:53.950940 kernel: Detected PIPT I-cache on CPU0
Jan 29 10:50:53.950947 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 10:50:53.950953 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 10:50:53.950960 kernel: CPU features: detected: Spectre-v4
Jan 29 10:50:53.950967 kernel: CPU features: detected: Spectre-BHB
Jan 29 10:50:53.950973 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 10:50:53.950980 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 10:50:53.950986 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 10:50:53.950993 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 10:50:53.950999 kernel: alternatives: applying boot alternatives
Jan 29 10:50:53.951006 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:50:53.951014 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 10:50:53.951020 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 10:50:53.951027 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 10:50:53.951033 kernel: Fallback order for Node 0: 0
Jan 29 10:50:53.951041 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 10:50:53.951047 kernel: Policy zone: DMA
Jan 29 10:50:53.951054 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 10:50:53.951060 kernel: software IO TLB: area num 4.
Jan 29 10:50:53.951067 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 10:50:53.951074 kernel: Memory: 2385944K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186344K reserved, 0K cma-reserved)
Jan 29 10:50:53.951081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 10:50:53.951088 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 10:50:53.951095 kernel: rcu: RCU event tracing is enabled.
Jan 29 10:50:53.951101 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 10:50:53.951108 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 10:50:53.951114 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 10:50:53.951123 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 10:50:53.951129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 10:50:53.951135 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 10:50:53.951189 kernel: GICv3: 256 SPIs implemented
Jan 29 10:50:53.951198 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 10:50:53.951205 kernel: Root IRQ handler: gic_handle_irq
Jan 29 10:50:53.951211 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 10:50:53.951218 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 10:50:53.951224 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 10:50:53.951231 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 10:50:53.951238 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 10:50:53.951247 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 10:50:53.951254 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 10:50:53.951260 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 10:50:53.951267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:50:53.951273 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 10:50:53.951280 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 10:50:53.951286 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 10:50:53.951292 kernel: arm-pv: using stolen time PV
Jan 29 10:50:53.951299 kernel: Console: colour dummy device 80x25
Jan 29 10:50:53.951306 kernel: ACPI: Core revision 20230628
Jan 29 10:50:53.951313 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 10:50:53.951321 kernel: pid_max: default: 32768 minimum: 301
Jan 29 10:50:53.951327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 10:50:53.951334 kernel: landlock: Up and running.
Jan 29 10:50:53.951340 kernel: SELinux: Initializing.
Jan 29 10:50:53.951347 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:50:53.951354 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:50:53.951360 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 10:50:53.951367 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 10:50:53.951373 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 10:50:53.951382 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 10:50:53.951388 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 10:50:53.951395 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 10:50:53.951401 kernel: Remapping and enabling EFI services.
Jan 29 10:50:53.951408 kernel: smp: Bringing up secondary CPUs ...
Jan 29 10:50:53.951414 kernel: Detected PIPT I-cache on CPU1
Jan 29 10:50:53.951421 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 10:50:53.951427 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 10:50:53.951434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:50:53.951442 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 10:50:53.951448 kernel: Detected PIPT I-cache on CPU2
Jan 29 10:50:53.951460 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 10:50:53.951468 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 10:50:53.951475 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:50:53.951482 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 10:50:53.951489 kernel: Detected PIPT I-cache on CPU3
Jan 29 10:50:53.951495 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 10:50:53.951502 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 10:50:53.951511 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:50:53.951517 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 10:50:53.951524 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 10:50:53.951531 kernel: SMP: Total of 4 processors activated.
Jan 29 10:50:53.951539 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 10:50:53.951546 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 10:50:53.951553 kernel: CPU features: detected: Common not Private translations
Jan 29 10:50:53.951561 kernel: CPU features: detected: CRC32 instructions
Jan 29 10:50:53.951569 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 10:50:53.951577 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 10:50:53.951584 kernel: CPU features: detected: LSE atomic instructions
Jan 29 10:50:53.951592 kernel: CPU features: detected: Privileged Access Never
Jan 29 10:50:53.951599 kernel: CPU features: detected: RAS Extension Support
Jan 29 10:50:53.951606 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 10:50:53.951614 kernel: CPU: All CPU(s) started at EL1
Jan 29 10:50:53.951621 kernel: alternatives: applying system-wide alternatives
Jan 29 10:50:53.951628 kernel: devtmpfs: initialized
Jan 29 10:50:53.951636 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 10:50:53.951645 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 10:50:53.951652 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 10:50:53.951659 kernel: SMBIOS 3.0.0 present.
Jan 29 10:50:53.951667 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 10:50:53.951674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 10:50:53.951681 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 10:50:53.951689 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 10:50:53.951696 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 10:50:53.951704 kernel: audit: initializing netlink subsys (disabled)
Jan 29 10:50:53.951718 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Jan 29 10:50:53.951726 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 10:50:53.951733 kernel: cpuidle: using governor menu
Jan 29 10:50:53.951741 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 10:50:53.951748 kernel: ASID allocator initialised with 32768 entries
Jan 29 10:50:53.951755 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 10:50:53.951763 kernel: Serial: AMBA PL011 UART driver
Jan 29 10:50:53.951770 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 10:50:53.951777 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 10:50:53.951786 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 10:50:53.951793 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 10:50:53.951800 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 10:50:53.951807 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 10:50:53.951814 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 10:50:53.951821 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 10:50:53.951828 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 10:50:53.951835 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 10:50:53.951842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 10:50:53.951850 kernel: ACPI: Added _OSI(Module Device)
Jan 29 10:50:53.951857 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 10:50:53.951864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 10:50:53.951871 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 10:50:53.951878 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 10:50:53.951885 kernel: ACPI: Interpreter enabled
Jan 29 10:50:53.951892 kernel: ACPI: Using GIC for interrupt routing
Jan 29 10:50:53.951899 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 10:50:53.951906 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 10:50:53.951915 kernel: printk: console [ttyAMA0] enabled
Jan 29 10:50:53.951928 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 10:50:53.952072 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 10:50:53.952166 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 10:50:53.952238 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 10:50:53.952303 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 10:50:53.952367 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 10:50:53.952379 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 10:50:53.952387 kernel: PCI host bridge to bus 0000:00
Jan 29 10:50:53.952458 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 10:50:53.952519 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 10:50:53.952579 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 10:50:53.952639 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 10:50:53.952724 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 10:50:53.952801 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 10:50:53.952868 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 10:50:53.952933 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 10:50:53.952998 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 10:50:53.953065 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 10:50:53.953133 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 10:50:53.953228 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 10:50:53.953295 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 10:50:53.953355 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 10:50:53.953413 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 10:50:53.953423 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 10:50:53.953430 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 10:50:53.953437 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 10:50:53.953444 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 10:50:53.953451 kernel: iommu: Default domain type: Translated
Jan 29 10:50:53.953460 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 10:50:53.953468 kernel: efivars: Registered efivars operations
Jan 29 10:50:53.953475 kernel: vgaarb: loaded
Jan 29 10:50:53.953482 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 10:50:53.953489 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 10:50:53.953496 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 10:50:53.953503 kernel: pnp: PnP ACPI init
Jan 29 10:50:53.953573 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 10:50:53.953585 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 10:50:53.953593 kernel: NET: Registered PF_INET protocol family
Jan 29 10:50:53.953600 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 10:50:53.953607 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 10:50:53.953614 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 10:50:53.953622 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 10:50:53.953629 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 10:50:53.953636 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 10:50:53.953643 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:50:53.953655 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:50:53.953665 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 10:50:53.953672 kernel: PCI: CLS 0 bytes, default 64
Jan 29 10:50:53.953679 kernel: kvm [1]: HYP mode not available
Jan 29 10:50:53.953686 kernel: Initialise system trusted keyrings
Jan 29 10:50:53.953693 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 10:50:53.953700 kernel: Key type asymmetric registered
Jan 29 10:50:53.953707 kernel: Asymmetric key parser 'x509' registered
Jan 29 10:50:53.953714 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 10:50:53.953724 kernel: io scheduler mq-deadline registered
Jan 29 10:50:53.953731 kernel: io scheduler kyber registered
Jan 29 10:50:53.953738 kernel: io scheduler bfq registered
Jan 29 10:50:53.953746 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 10:50:53.953755 kernel: ACPI: button: Power Button [PWRB]
Jan 29 10:50:53.953764 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 10:50:53.953829 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 10:50:53.953839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 10:50:53.953846 kernel: thunder_xcv, ver 1.0
Jan 29 10:50:53.953855 kernel: thunder_bgx, ver 1.0
Jan 29 10:50:53.953862 kernel: nicpf, ver 1.0
Jan 29 10:50:53.953869 kernel: nicvf, ver 1.0
Jan 29 10:50:53.953943 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 10:50:53.954007 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:50:53 UTC (1738147853)
Jan 29 10:50:53.954016 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 10:50:53.954024 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 10:50:53.954031 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 10:50:53.954040 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 10:50:53.954048 kernel: NET: Registered PF_INET6 protocol family
Jan 29 10:50:53.954055 kernel: Segment Routing with IPv6
Jan 29 10:50:53.954062 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 10:50:53.954069 kernel: NET: Registered PF_PACKET protocol family
Jan 29 10:50:53.954076 kernel: Key type dns_resolver registered
Jan 29 10:50:53.954083 kernel: registered taskstats version 1
Jan 29 10:50:53.954090 kernel: Loading compiled-in X.509 certificates
Jan 29 10:50:53.954097 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 29 10:50:53.954106 kernel: Key type .fscrypt registered
Jan 29 10:50:53.954113 kernel: Key type fscrypt-provisioning registered
Jan 29 10:50:53.954120 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 10:50:53.954127 kernel: ima: Allocated hash algorithm: sha1
Jan 29 10:50:53.954134 kernel: ima: No architecture policies found
Jan 29 10:50:53.954158 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 10:50:53.954167 kernel: clk: Disabling unused clocks
Jan 29 10:50:53.954174 kernel: Freeing unused kernel memory: 39936K
Jan 29 10:50:53.954181 kernel: Run /init as init process
Jan 29 10:50:53.954190 kernel: with arguments:
Jan 29 10:50:53.954197 kernel: /init
Jan 29 10:50:53.954204 kernel: with environment:
Jan 29 10:50:53.954211 kernel: HOME=/
Jan 29 10:50:53.954218 kernel: TERM=linux
Jan 29 10:50:53.954224 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 10:50:53.954233 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:50:53.954242 systemd[1]: Detected virtualization kvm.
Jan 29 10:50:53.954252 systemd[1]: Detected architecture arm64.
Jan 29 10:50:53.954259 systemd[1]: Running in initrd.
Jan 29 10:50:53.954266 systemd[1]: No hostname configured, using default hostname.
Jan 29 10:50:53.954274 systemd[1]: Hostname set to <localhost>.
Jan 29 10:50:53.954282 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 10:50:53.954289 systemd[1]: Queued start job for default target initrd.target.
Jan 29 10:50:53.954297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:50:53.954305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:50:53.954315 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 10:50:53.954323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:50:53.954331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 10:50:53.954339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 10:50:53.954348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 10:50:53.954356 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 10:50:53.954365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:50:53.954373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:50:53.954380 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:50:53.954388 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:50:53.954396 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:50:53.954403 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:50:53.954411 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:50:53.954419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:50:53.954427 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 10:50:53.954436 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 10:50:53.954444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:50:53.954452 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:50:53.954460 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:50:53.954467 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:50:53.954475 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 10:50:53.954483 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:50:53.954491 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 10:50:53.954498 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 10:50:53.954507 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:50:53.954515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:50:53.954523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:50:53.954530 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 10:50:53.954538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:50:53.954545 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 10:50:53.954572 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 10:50:53.954591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 10:50:53.954601 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:50:53.954608 kernel: Bridge firewalling registered
Jan 29 10:50:53.954616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:50:53.954624 systemd-journald[239]: Journal started
Jan 29 10:50:53.954648 systemd-journald[239]: Runtime Journal (/run/log/journal/da6184d4e2d64dceabff57748b90f607) is 5.9M, max 47.3M, 41.4M free.
Jan 29 10:50:53.936970 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 10:50:53.957480 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:50:53.953707 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 10:50:53.958723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:50:53.960702 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:50:53.964692 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:50:53.966663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:50:53.971386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:50:53.973224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:50:53.977321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:50:53.987189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:50:53.990224 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:50:53.991733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:50:54.006311 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 10:50:54.008840 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:50:54.019233 dracut-cmdline[277]: dracut-dracut-053
Jan 29 10:50:54.021681 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:50:54.039755 systemd-resolved[279]: Positive Trust Anchors:
Jan 29 10:50:54.039776 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:50:54.039807 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:50:54.044600 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 29 10:50:54.045610 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:50:54.049662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:50:54.090182 kernel: SCSI subsystem initialized
Jan 29 10:50:54.095167 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 10:50:54.102170 kernel: iscsi: registered transport (tcp)
Jan 29 10:50:54.117175 kernel: iscsi: registered transport (qla4xxx)
Jan 29 10:50:54.117191 kernel: QLogic iSCSI HBA Driver
Jan 29 10:50:54.162829 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:50:54.180298 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 10:50:54.196624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 10:50:54.198383 kernel: device-mapper: uevent: version 1.0.3
Jan 29 10:50:54.198424 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 10:50:54.243188 kernel: raid6: neonx8 gen() 15609 MB/s
Jan 29 10:50:54.260169 kernel: raid6: neonx4 gen() 15691 MB/s
Jan 29 10:50:54.277174 kernel: raid6: neonx2 gen() 13114 MB/s
Jan 29 10:50:54.294169 kernel: raid6: neonx1 gen() 10413 MB/s
Jan 29 10:50:54.311172 kernel: raid6: int64x8 gen() 6729 MB/s
Jan 29 10:50:54.328162 kernel: raid6: int64x4 gen() 7300 MB/s
Jan 29 10:50:54.345161 kernel: raid6: int64x2 gen() 6068 MB/s
Jan 29 10:50:54.362284 kernel: raid6: int64x1 gen() 5049 MB/s
Jan 29 10:50:54.362300 kernel: raid6: using algorithm neonx4 gen() 15691 MB/s
Jan 29 10:50:54.380315 kernel: raid6: .... xor() 12451 MB/s, rmw enabled
Jan 29 10:50:54.380328 kernel: raid6: using neon recovery algorithm
Jan 29 10:50:54.385175 kernel: xor: measuring software checksum speed
Jan 29 10:50:54.386458 kernel: 8regs : 18461 MB/sec
Jan 29 10:50:54.386471 kernel: 32regs : 21670 MB/sec
Jan 29 10:50:54.387777 kernel: arm64_neon : 27710 MB/sec
Jan 29 10:50:54.387806 kernel: xor: using function: arm64_neon (27710 MB/sec)
Jan 29 10:50:54.446183 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 10:50:54.456101 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:50:54.468283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:50:54.479954 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 29 10:50:54.482993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:50:54.486425 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 10:50:54.499931 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 29 10:50:54.523998 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:50:54.537294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:50:54.578488 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:50:54.587304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 10:50:54.601173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:50:54.602730 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:50:54.604606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:50:54.606983 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:50:54.617280 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 10:50:54.627173 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 10:50:54.640283 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 10:50:54.640512 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 10:50:54.640526 kernel: GPT:9289727 != 19775487
Jan 29 10:50:54.640535 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 10:50:54.640544 kernel: GPT:9289727 != 19775487
Jan 29 10:50:54.640560 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 10:50:54.640569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 10:50:54.628811 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:50:54.634892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:50:54.634998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:50:54.639206 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:50:54.640324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:50:54.640460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:50:54.641701 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:50:54.649360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:50:54.661067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:50:54.670170 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (520)
Jan 29 10:50:54.672182 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (506)
Jan 29 10:50:54.673351 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:50:54.682360 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 10:50:54.688186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 10:50:54.697171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:50:54.701295 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 10:50:54.702497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 10:50:54.708222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 10:50:54.722335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 10:50:54.727852 disk-uuid[562]: Primary Header is updated.
Jan 29 10:50:54.727852 disk-uuid[562]: Secondary Entries is updated.
Jan 29 10:50:54.727852 disk-uuid[562]: Secondary Header is updated.
Jan 29 10:50:54.735165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 10:50:55.746173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 10:50:55.746450 disk-uuid[563]: The operation has completed successfully.
Jan 29 10:50:55.775254 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 10:50:55.775370 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 10:50:55.791312 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 10:50:55.796571 sh[575]: Success
Jan 29 10:50:55.808163 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 10:50:55.838240 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 10:50:55.854537 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 10:50:55.857167 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 10:50:55.868222 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 29 10:50:55.868264 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:50:55.868276 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 10:50:55.868285 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 10:50:55.869683 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 10:50:55.872838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 10:50:55.874313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 10:50:55.882288 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 10:50:55.883871 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 10:50:55.892004 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:50:55.892043 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:50:55.892053 kernel: BTRFS info (device vda6): using free space tree
Jan 29 10:50:55.895168 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 10:50:55.902468 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 10:50:55.904272 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:50:55.908631 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 10:50:55.918311 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 10:50:55.981266 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:50:55.993394 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:50:56.018704 systemd-networkd[761]: lo: Link UP
Jan 29 10:50:56.018711 systemd-networkd[761]: lo: Gained carrier
Jan 29 10:50:56.019706 systemd-networkd[761]: Enumeration completed
Jan 29 10:50:56.019791 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:50:56.020165 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:50:56.020168 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:50:56.021221 systemd[1]: Reached target network.target - Network.
Jan 29 10:50:56.021332 systemd-networkd[761]: eth0: Link UP
Jan 29 10:50:56.027794 ignition[669]: Ignition 2.20.0
Jan 29 10:50:56.021335 systemd-networkd[761]: eth0: Gained carrier
Jan 29 10:50:56.027801 ignition[669]: Stage: fetch-offline
Jan 29 10:50:56.021342 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:50:56.027833 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:50:56.027841 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:50:56.027997 ignition[669]: parsed url from cmdline: ""
Jan 29 10:50:56.028001 ignition[669]: no config URL provided
Jan 29 10:50:56.028005 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 10:50:56.028012 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jan 29 10:50:56.028037 ignition[669]: op(1): [started] loading QEMU firmware config module
Jan 29 10:50:56.028041 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 10:50:56.032988 ignition[669]: op(1): [finished] loading QEMU firmware config module
Jan 29 10:50:56.042216 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 10:50:56.076433 ignition[669]: parsing config with SHA512: 2e2995f490a015a2d6739fcab0bac68f40a68ca57d98a28c2c3ca301451e288446b400a15136852d3f6878f25c1b38a0e958fe348994cc13a40653ee5fed6012
Jan 29 10:50:56.080943 unknown[669]: fetched base config from "system"
Jan 29 10:50:56.080953 unknown[669]: fetched user config from "qemu"
Jan 29 10:50:56.081364 ignition[669]: fetch-offline: fetch-offline passed
Jan 29 10:50:56.081293 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.53
Jan 29 10:50:56.081432 ignition[669]: Ignition finished successfully
Jan 29 10:50:56.081300 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jan 29 10:50:56.083370 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:50:56.084720 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 10:50:56.092375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 10:50:56.102639 ignition[773]: Ignition 2.20.0
Jan 29 10:50:56.102650 ignition[773]: Stage: kargs
Jan 29 10:50:56.102798 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:50:56.102808 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:50:56.103692 ignition[773]: kargs: kargs passed
Jan 29 10:50:56.103731 ignition[773]: Ignition finished successfully
Jan 29 10:50:56.108223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 10:50:56.110105 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 10:50:56.123528 ignition[781]: Ignition 2.20.0
Jan 29 10:50:56.124426 ignition[781]: Stage: disks
Jan 29 10:50:56.124605 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:50:56.124616 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:50:56.127317 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 10:50:56.125546 ignition[781]: disks: disks passed
Jan 29 10:50:56.129407 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 10:50:56.125595 ignition[781]: Ignition finished successfully
Jan 29 10:50:56.131229 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 10:50:56.132940 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:50:56.134869 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:50:56.136567 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:50:56.149285 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 10:50:56.167535 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 10:50:56.171649 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 10:50:56.184274 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 10:50:56.225043 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 10:50:56.226659 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 10:50:56.226388 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:50:56.241240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:50:56.243209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 10:50:56.245571 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 10:50:56.245618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 10:50:56.245650 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:50:56.253388 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Jan 29 10:50:56.249811 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 10:50:56.258020 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:50:56.258041 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:50:56.258051 kernel: BTRFS info (device vda6): using free space tree
Jan 29 10:50:56.253290 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 10:50:56.261189 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 10:50:56.262925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:50:56.297931 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 10:50:56.301637 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 29 10:50:56.305563 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 10:50:56.309206 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 10:50:56.378300 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 10:50:56.386255 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 10:50:56.388390 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 10:50:56.393155 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:50:56.408327 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 10:50:56.410170 ignition[912]: INFO : Ignition 2.20.0
Jan 29 10:50:56.410170 ignition[912]: INFO : Stage: mount
Jan 29 10:50:56.410170 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:50:56.410170 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:50:56.414609 ignition[912]: INFO : mount: mount passed
Jan 29 10:50:56.414609 ignition[912]: INFO : Ignition finished successfully
Jan 29 10:50:56.411999 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 10:50:56.426273 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 10:50:56.865829 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 10:50:56.882366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:50:56.888166 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Jan 29 10:50:56.890482 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:50:56.890505 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:50:56.890516 kernel: BTRFS info (device vda6): using free space tree
Jan 29 10:50:56.893326 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 10:50:56.894398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:50:56.917417 ignition[943]: INFO : Ignition 2.20.0
Jan 29 10:50:56.917417 ignition[943]: INFO : Stage: files
Jan 29 10:50:56.919015 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:50:56.919015 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:50:56.919015 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 10:50:56.922567 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 10:50:56.922567 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 10:50:56.922567 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 10:50:56.926837 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 10:50:56.926837 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 10:50:56.926837 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:50:56.926837 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 10:50:56.922987 unknown[943]: wrote ssh authorized keys file for user: core
Jan 29 10:50:56.989593 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 10:50:57.100590 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:50:57.102595 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 10:50:57.102595 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 10:50:57.509940 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 10:50:57.641982 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:50:57.644025 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 10:50:57.785283 systemd-networkd[761]: eth0: Gained IPv6LL
Jan 29 10:50:57.977462 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 10:50:58.515790 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:50:58.515790 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 10:50:58.519632 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 10:50:58.521792 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 10:50:58.575661 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 10:50:58.579835 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 10:50:58.582222 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 10:50:58.582222 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 10:50:58.582222 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 10:50:58.582222 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:50:58.582222 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:50:58.582222 ignition[943]: INFO : files: files passed
Jan 29 10:50:58.582222 ignition[943]: INFO : Ignition finished successfully
Jan 29 10:50:58.584318 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 10:50:58.596411 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 10:50:58.599293 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 10:50:58.601068 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 10:50:58.601213 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 10:50:58.610622 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 10:50:58.615189 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:50:58.615189 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:50:58.618972 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:50:58.620745 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:50:58.623096 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 10:50:58.631355 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 10:50:58.652194 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 10:50:58.652317 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 10:50:58.654612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 10:50:58.656435 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 10:50:58.658283 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 10:50:58.659221 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 10:50:58.677664 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:50:58.696494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 10:50:58.706434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:50:58.707788 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:50:58.710029 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 10:50:58.712006 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 10:50:58.712139 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:50:58.714929 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 10:50:58.717016 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 10:50:58.718711 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 10:50:58.720476 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:50:58.722434 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 10:50:58.724437 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 10:50:58.726296 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:50:58.728352 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 10:50:58.730372 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 10:50:58.732126 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 10:50:58.733711 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 10:50:58.733844 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:50:58.736162 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:50:58.738191 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:50:58.740196 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 10:50:58.742189 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:50:58.743477 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 10:50:58.743600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:50:58.746579 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 10:50:58.746702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:50:58.748822 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 10:50:58.750523 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 10:50:58.753393 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:50:58.754760 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 10:50:58.757088 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 10:50:58.758712 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 10:50:58.758814 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:50:58.760584 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 10:50:58.760673 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:50:58.762299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 10:50:58.762407 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:50:58.764304 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 10:50:58.764408 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 10:50:58.777332 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 10:50:58.778946 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 10:50:58.779896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 10:50:58.780021 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:50:58.782017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 10:50:58.782117 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:50:58.787487 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 10:50:58.789184 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 10:50:58.792930 ignition[998]: INFO : Ignition 2.20.0
Jan 29 10:50:58.792930 ignition[998]: INFO : Stage: umount
Jan 29 10:50:58.792930 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:50:58.792930 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:50:58.792930 ignition[998]: INFO : umount: umount passed
Jan 29 10:50:58.792930 ignition[998]: INFO : Ignition finished successfully
Jan 29 10:50:58.793347 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 10:50:58.793443 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 10:50:58.795135 systemd[1]: Stopped target network.target - Network.
Jan 29 10:50:58.796460 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 10:50:58.796519 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 10:50:58.798428 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 10:50:58.798473 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 10:50:58.800626 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 10:50:58.800673 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 10:50:58.802233 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 10:50:58.802289 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 10:50:58.804283 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 10:50:58.806132 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 10:50:58.809479 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 10:50:58.813179 systemd-networkd[761]: eth0: DHCPv6 lease lost
Jan 29 10:50:58.815666 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 10:50:58.815773 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 10:50:58.822171 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 10:50:58.822279 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 10:50:58.825310 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 10:50:58.825353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:50:58.841326 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 10:50:58.842249 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 10:50:58.842324 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:50:58.844315 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 10:50:58.844363 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:50:58.846155 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 10:50:58.846208 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
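
The umount stage logged by ignition[998] is Ignition's final pass, and its records survive into the persistent journal. Once the system is up, the stage-by-stage output seen here can be pulled back out by syslog identifier, for example:

    # Recover Ignition's per-stage output after boot (identifier as in the
    # ignition[998] lines above):
    journalctl -t ignition -o short-precise
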
Jan 29 10:50:58.848295 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 10:50:58.848348 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:50:58.850427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:50:58.877533 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 10:50:58.877641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 10:50:58.879624 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 10:50:58.879732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:50:58.881640 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 10:50:58.881713 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 10:50:58.883896 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 10:50:58.883957 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:50:58.885445 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 10:50:58.885480 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:50:58.887194 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 10:50:58.887256 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:50:58.889940 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 10:50:58.889986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:50:58.892735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:50:58.892783 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:50:58.895697 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 10:50:58.895743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 10:50:58.908321 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 10:50:58.909359 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 10:50:58.909417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:50:58.911557 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 10:50:58.911601 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:50:58.913689 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 10:50:58.913737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:50:58.915902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:50:58.915947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:50:58.918286 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 10:50:58.918369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 10:50:58.920928 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 10:50:58.922766 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 10:50:58.933125 systemd[1]: Switching root.
Jan 29 10:50:58.967633 systemd-journald[239]: Journal stopped
Jan 29 10:50:59.712112 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
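
"Switching root" is the hand-off from the initramfs to the real root filesystem: PID 1 serializes its state, the initrd journal stops (hence the SIGTERM message from systemd-journald[239]), and systemd re-executes itself from /sysroot. For orientation only, the operation initrd-switch-root.service performs corresponds roughly to:

    # Rough equivalent of initrd-switch-root.service; never run by hand on
    # a live system.
    systemctl switch-root /sysroot
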
Jan 29 10:50:59.712188 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 10:50:59.712202 kernel: SELinux: policy capability open_perms=1
Jan 29 10:50:59.712213 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 10:50:59.712226 kernel: SELinux: policy capability always_check_network=0
Jan 29 10:50:59.712235 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 10:50:59.712245 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 10:50:59.712254 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 10:50:59.712272 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 10:50:59.712282 kernel: audit: type=1403 audit(1738147859.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 10:50:59.712293 systemd[1]: Successfully loaded SELinux policy in 33.330ms.
Jan 29 10:50:59.712313 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.236ms.
Jan 29 10:50:59.712324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:50:59.712337 systemd[1]: Detected virtualization kvm.
Jan 29 10:50:59.712349 systemd[1]: Detected architecture arm64.
Jan 29 10:50:59.712359 systemd[1]: Detected first boot.
Jan 29 10:50:59.712370 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 10:50:59.712382 zram_generator::config[1042]: No configuration found.
Jan 29 10:50:59.712393 systemd[1]: Populated /etc with preset unit settings.
Jan 29 10:50:59.712404 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 10:50:59.712415 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 10:50:59.712427 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 10:50:59.712441 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 10:50:59.712452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 10:50:59.712462 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 10:50:59.712473 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 10:50:59.712484 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 10:50:59.712494 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 10:50:59.712505 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 10:50:59.712517 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 10:50:59.712528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:50:59.712540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:50:59.712550 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 10:50:59.712561 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 10:50:59.712572 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
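
The kernel lines enumerate the capability flags of the SELinux policy systemd has just loaded, and the audit record (type=1403) marks the policy load itself. On the booted host the resulting state can be confirmed from a shell with the standard SELinux tools:

    # Inspect the loaded SELinux policy and current enforcement mode:
    getenforce
    sestatus
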
Jan 29 10:50:59.712583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:50:59.712594 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 10:50:59.712605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:50:59.712617 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 10:50:59.712628 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 10:50:59.712638 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:50:59.712649 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 10:50:59.712660 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:50:59.712673 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:50:59.712684 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:50:59.712695 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:50:59.712708 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 10:50:59.712720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 10:50:59.712730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:50:59.712741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:50:59.712761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:50:59.712772 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 10:50:59.712783 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 10:50:59.712793 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 10:50:59.712804 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 10:50:59.712820 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 10:50:59.712831 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 10:50:59.712842 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 10:50:59.712853 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 10:50:59.712864 systemd[1]: Reached target machines.target - Containers.
Jan 29 10:50:59.712875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 10:50:59.712886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:50:59.712897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:50:59.712908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 10:50:59.712922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:50:59.712933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:50:59.712943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:50:59.712955 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 10:50:59.712965 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:50:59.712977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 10:50:59.712988 kernel: fuse: init (API version 7.39)
Jan 29 10:50:59.712998 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 10:50:59.713010 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 10:50:59.713020 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 10:50:59.713031 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 10:50:59.713041 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:50:59.713052 kernel: loop: module loaded
Jan 29 10:50:59.713062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:50:59.713072 kernel: ACPI: bus type drm_connector registered
Jan 29 10:50:59.713083 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 10:50:59.713095 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 10:50:59.713108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:50:59.713118 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 10:50:59.713129 systemd[1]: Stopped verity-setup.service.
Jan 29 10:50:59.713140 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 10:50:59.713160 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 10:50:59.713171 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 10:50:59.713182 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 10:50:59.713192 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 10:50:59.713202 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 10:50:59.713233 systemd-journald[1120]: Collecting audit messages is disabled.
Jan 29 10:50:59.713265 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 10:50:59.713279 systemd-journald[1120]: Journal started
Jan 29 10:50:59.713307 systemd-journald[1120]: Runtime Journal (/run/log/journal/da6184d4e2d64dceabff57748b90f607) is 5.9M, max 47.3M, 41.4M free.
Jan 29 10:50:59.492480 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 10:50:59.509574 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 10:50:59.509934 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 10:50:59.715396 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:50:59.716172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:50:59.717664 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 10:50:59.717822 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 10:50:59.719390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:50:59.719520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:50:59.720895 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:50:59.721032 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:50:59.722419 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:50:59.722565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:50:59.724234 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 10:50:59.724373 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 10:50:59.725723 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:50:59.725871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:50:59.727320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:50:59.728809 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 10:50:59.730374 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 10:50:59.743039 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 10:50:59.756299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 10:50:59.758659 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 10:50:59.759798 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 10:50:59.759839 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:50:59.761826 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 10:50:59.764065 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 10:50:59.766311 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 10:50:59.767441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:50:59.768763 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 10:50:59.770888 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 10:50:59.772196 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:50:59.776424 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 10:50:59.777628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:50:59.780287 systemd-journald[1120]: Time spent on flushing to /var/log/journal/da6184d4e2d64dceabff57748b90f607 is 21.484ms for 862 entries.
Jan 29 10:50:59.780287 systemd-journald[1120]: System Journal (/var/log/journal/da6184d4e2d64dceabff57748b90f607) is 8.0M, max 195.6M, 187.6M free.
Jan 29 10:50:59.812255 systemd-journald[1120]: Received client request to flush runtime journal.
Jan 29 10:50:59.812302 kernel: loop0: detected capacity change from 0 to 189592
Jan 29 10:50:59.780471 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:50:59.783978 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 10:50:59.789350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:50:59.792063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:50:59.793671 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 10:50:59.795008 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
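
systemd-journal-flush.service, started above, moves the volatile journal from /run/log/journal into persistent storage under /var/log/journal; the size caps journald prints (47.3M runtime, 195.6M system) are the automatic defaults it derived for this machine's RAM and disk. They can be pinned explicitly; a sketch with illustrative values follows (the drop-in name and the numbers are examples, not this image's configuration):

    # Illustrative journald sizing override:
    mkdir -p /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/10-size.conf <<'EOF'
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M
    EOF
    journalctl --flush    # performs the same flush as the service, on demand
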
Jan 29 10:50:59.796559 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 10:50:59.798889 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 10:50:59.811573 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 10:50:59.821660 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 10:50:59.820353 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 10:50:59.824405 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 10:50:59.827813 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 10:50:59.832371 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:50:59.840821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 10:50:59.842302 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 10:50:59.844035 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Jan 29 10:50:59.844054 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Jan 29 10:50:59.845407 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 10:50:59.847195 kernel: loop1: detected capacity change from 0 to 113552
Jan 29 10:50:59.851017 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:50:59.861387 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 10:50:59.881416 kernel: loop2: detected capacity change from 0 to 116784
Jan 29 10:50:59.888832 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 10:50:59.898408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:50:59.910526 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 29 10:50:59.910546 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 29 10:50:59.914370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:50:59.917169 kernel: loop3: detected capacity change from 0 to 189592
Jan 29 10:50:59.923171 kernel: loop4: detected capacity change from 0 to 113552
Jan 29 10:50:59.931171 kernel: loop5: detected capacity change from 0 to 116784
Jan 29 10:50:59.935175 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 10:50:59.935571 (sd-merge)[1183]: Merged extensions into '/usr'.
Jan 29 10:50:59.938869 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 10:50:59.938888 systemd[1]: Reloading...
Jan 29 10:50:59.987738 zram_generator::config[1207]: No configuration found.
Jan 29 10:51:00.087825 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 10:51:00.093691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:51:00.129192 systemd[1]: Reloading finished in 189 ms.
Jan 29 10:51:00.163620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 10:51:00.166186 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 10:51:00.185415 systemd[1]: Starting ensure-sysext.service...
Jan 29 10:51:00.187388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:51:00.196115 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jan 29 10:51:00.196133 systemd[1]: Reloading...
Jan 29 10:51:00.206284 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 10:51:00.206498 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 10:51:00.207107 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 10:51:00.207340 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 10:51:00.207383 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 10:51:00.210377 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:51:00.210486 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 10:51:00.218911 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:51:00.219028 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 10:51:00.243174 zram_generator::config[1273]: No configuration found.
Jan 29 10:51:00.327175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:51:00.362843 systemd[1]: Reloading finished in 166 ms.
Jan 29 10:51:00.377083 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 10:51:00.389666 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:51:00.397806 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 10:51:00.400361 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 10:51:00.405895 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 10:51:00.415254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:51:00.420433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:51:00.423994 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 10:51:00.427411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:51:00.428786 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:51:00.431387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:51:00.440006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:51:00.441284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:51:00.444601 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 10:51:00.448466 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Jan 29 10:51:00.449745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
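
The (sd-merge) lines in the previous block are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is why a daemon reload immediately follows. The merge can be inspected or redone by hand; the kubernetes image is the one Ignition linked into /etc/extensions earlier in this log:

    # Inspect and refresh merged system extensions:
    systemd-sysext status     # lists extensions and the hierarchies they overlay
    systemd-sysext refresh    # unmerge and remerge after adding/removing a .raw image
    ls -l /etc/extensions/kubernetes.raw
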
Jan 29 10:51:00.453809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:51:00.454827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:51:00.458396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:51:00.458531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:51:00.460558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:51:00.460695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:51:00.473665 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 10:51:00.477047 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:51:00.479988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:51:00.489232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:51:00.492034 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:51:00.495483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:51:00.498479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:51:00.499595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:51:00.502535 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:51:00.504689 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 10:51:00.508927 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 10:51:00.510645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 10:51:00.512425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:51:00.512556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:51:00.527200 augenrules[1372]: No rules
Jan 29 10:51:00.529179 systemd[1]: Finished ensure-sysext.service.
Jan 29 10:51:00.534260 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 10:51:00.534453 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 10:51:00.536037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:51:00.537225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:51:00.539613 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:51:00.539749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:51:00.541584 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:51:00.541722 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:51:00.545318 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 10:51:00.554474 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 10:51:00.556696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:51:00.556762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:51:00.560195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348)
Jan 29 10:51:00.569370 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 10:51:00.570526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 10:51:00.599133 systemd-networkd[1368]: lo: Link UP
Jan 29 10:51:00.599155 systemd-networkd[1368]: lo: Gained carrier
Jan 29 10:51:00.604511 systemd-networkd[1368]: Enumeration completed
Jan 29 10:51:00.604703 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:51:00.605183 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:00.605286 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:51:00.606029 systemd-networkd[1368]: eth0: Link UP
Jan 29 10:51:00.606100 systemd-networkd[1368]: eth0: Gained carrier
Jan 29 10:51:00.606199 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:00.611688 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:00.614327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 10:51:00.617945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 10:51:00.622238 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 10:51:00.623317 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 10:51:00.636857 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 10:51:00.650484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:00.651298 systemd-resolved[1313]: Positive Trust Anchors:
Jan 29 10:51:00.651314 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:51:00.651346 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:51:00.651794 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 10:51:00.651847 systemd-timesyncd[1389]: Initial clock synchronization to Wed 2025-01-29 10:51:00.344997 UTC.
Jan 29 10:51:00.652014 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 10:51:00.653793 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 10:51:00.658716 systemd-resolved[1313]: Defaulting to hostname 'linux'.
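
eth0 is matched here by Flatcar's catch-all zz-default.network, which simply enables DHCP; the DHCPv4 lease (10.0.0.53/16 via 10.0.0.1) and the NTP server that systemd-timesyncd then contacts both come from that lease. A drop-in with the same effect would look roughly like the following (the file name is illustrative; anything in /etc/systemd/network/ takes precedence over the /usr/lib default):

    # Roughly what zz-default.network does for this NIC:
    cat > /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl reload
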
Jan 29 10:51:00.658948 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 10:51:00.661864 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 10:51:00.664834 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:51:00.666077 systemd[1]: Reached target network.target - Network.
Jan 29 10:51:00.667088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:51:00.680206 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:51:00.706180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:00.725772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 10:51:00.727439 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:51:00.728610 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:51:00.729865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 10:51:00.731307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 10:51:00.732785 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 10:51:00.734026 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 10:51:00.735399 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 10:51:00.736748 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 10:51:00.736787 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:51:00.737743 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:51:00.739676 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 10:51:00.742197 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 10:51:00.753934 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 10:51:00.756261 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 10:51:00.757878 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 10:51:00.759118 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:51:00.760099 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:51:00.761125 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 10:51:00.761172 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 10:51:00.762176 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 10:51:00.764240 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:51:00.764508 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 10:51:00.766928 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 10:51:00.770467 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 10:51:00.771641 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 10:51:00.773460 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 10:51:00.777982 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 10:51:00.780085 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 10:51:00.782334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 10:51:00.783938 jq[1415]: false
Jan 29 10:51:00.786372 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 10:51:00.790255 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 10:51:00.791072 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 10:51:00.791783 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 10:51:00.795531 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 10:51:00.798333 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 10:51:00.802803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 10:51:00.802961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 10:51:00.804518 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 10:51:00.805217 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 10:51:00.806901 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 10:51:00.811604 jq[1426]: true
Jan 29 10:51:00.807048 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 10:51:00.818929 dbus-daemon[1414]: [system] SELinux support is enabled
Jan 29 10:51:00.820343 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 10:51:00.821803 extend-filesystems[1416]: Found loop3
Jan 29 10:51:00.821803 extend-filesystems[1416]: Found loop4
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found loop5
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda1
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda2
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda3
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found usr
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda4
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda6
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda7
Jan 29 10:51:00.825342 extend-filesystems[1416]: Found vda9
Jan 29 10:51:00.825342 extend-filesystems[1416]: Checking size of /dev/vda9
Jan 29 10:51:00.824928 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 10:51:00.838435 tar[1432]: linux-arm64/helm
Jan 29 10:51:00.824960 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 10:51:00.846452 extend-filesystems[1416]: Resized partition /dev/vda9
Jan 29 10:51:00.829871 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
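
extend-filesystems has enumerated the block devices, grown the ROOT partition (vda9), and queued the on-line filesystem grow; the resize2fs run and the kernel's EXT4-fs resize messages follow below. Done manually for this layout, the sequence is roughly the following sketch (growpart is from cloud-utils; the Flatcar service uses its own partition logic rather than that tool):

    # Manual equivalent of extend-filesystems.service for this disk layout:
    growpart /dev/vda 9     # grow partition 9 to fill the disk
    resize2fs /dev/vda9     # grow ext4 in place while mounted, as in the log below
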
Jan 29 10:51:00.850438 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 29 10:51:00.855437 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 10:51:00.829889 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 10:51:00.839691 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 10:51:00.857214 jq[1437]: true Jan 29 10:51:00.860316 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 10:51:00.894993 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1356) Jan 29 10:51:00.895036 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 10:51:00.861603 systemd-logind[1422]: New seat seat0. Jan 29 10:51:00.895155 update_engine[1425]: I20250129 10:51:00.875622 1425 main.cc:92] Flatcar Update Engine starting Jan 29 10:51:00.895155 update_engine[1425]: I20250129 10:51:00.882007 1425 update_check_scheduler.cc:74] Next update check in 5m42s Jan 29 10:51:00.890081 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 10:51:00.893468 systemd[1]: Started update-engine.service - Update Engine. Jan 29 10:51:00.900421 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 10:51:00.900421 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 10:51:00.900421 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 10:51:00.906716 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Jan 29 10:51:00.903746 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 10:51:00.908837 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 10:51:00.909012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 10:51:00.953312 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:51:00.954769 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 10:51:00.956784 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 10:51:00.987075 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 10:51:01.065348 containerd[1442]: time="2025-01-29T10:51:01.065255065Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 10:51:01.092015 containerd[1442]: time="2025-01-29T10:51:01.091943934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.093476 containerd[1442]: time="2025-01-29T10:51:01.093430314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093547829Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093581602Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093733314Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093753124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093810170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093822171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093982038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.093995232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.094006348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.094014888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094268 containerd[1442]: time="2025-01-29T10:51:01.094080319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094631 containerd[1442]: time="2025-01-29T10:51:01.094303347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094631 containerd[1442]: time="2025-01-29T10:51:01.094401821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:51:01.094631 containerd[1442]: time="2025-01-29T10:51:01.094415669Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 10:51:01.094631 containerd[1442]: time="2025-01-29T10:51:01.094489986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 10:51:01.094631 containerd[1442]: time="2025-01-29T10:51:01.094542185Z" level=info msg="metadata content store policy set" policy=shared Jan 29 10:51:01.098123 containerd[1442]: time="2025-01-29T10:51:01.098091749Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 10:51:01.098250 containerd[1442]: time="2025-01-29T10:51:01.098163027Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 10:51:01.098250 containerd[1442]: time="2025-01-29T10:51:01.098181260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 29 10:51:01.098250 containerd[1442]: time="2025-01-29T10:51:01.098196339Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 10:51:01.098250 containerd[1442]: time="2025-01-29T10:51:01.098209571Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 10:51:01.098463 containerd[1442]: time="2025-01-29T10:51:01.098405981Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 10:51:01.098681 containerd[1442]: time="2025-01-29T10:51:01.098665244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 10:51:01.098788 containerd[1442]: time="2025-01-29T10:51:01.098769757Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 10:51:01.098824 containerd[1442]: time="2025-01-29T10:51:01.098792260Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 10:51:01.098824 containerd[1442]: time="2025-01-29T10:51:01.098807724Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 10:51:01.098824 containerd[1442]: time="2025-01-29T10:51:01.098820225Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098873 containerd[1442]: time="2025-01-29T10:51:01.098832534Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098873 containerd[1442]: time="2025-01-29T10:51:01.098844498Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098873 containerd[1442]: time="2025-01-29T10:51:01.098859884Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098873 containerd[1442]: time="2025-01-29T10:51:01.098874578Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098948 containerd[1442]: time="2025-01-29T10:51:01.098887349Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098948 containerd[1442]: time="2025-01-29T10:51:01.098899774Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098948 containerd[1442]: time="2025-01-29T10:51:01.098910506Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 10:51:01.098948 containerd[1442]: time="2025-01-29T10:51:01.098940664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099018 containerd[1442]: time="2025-01-29T10:51:01.098955627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099018 containerd[1442]: time="2025-01-29T10:51:01.098967936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099018 containerd[1442]: time="2025-01-29T10:51:01.099001402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 10:51:01.099018 containerd[1442]: time="2025-01-29T10:51:01.099013788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099091 containerd[1442]: time="2025-01-29T10:51:01.099027174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099091 containerd[1442]: time="2025-01-29T10:51:01.099050331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099091 containerd[1442]: time="2025-01-29T10:51:01.099064218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099091 containerd[1442]: time="2025-01-29T10:51:01.099077450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099180 containerd[1442]: time="2025-01-29T10:51:01.099091644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099180 containerd[1442]: time="2025-01-29T10:51:01.099109416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099180 containerd[1442]: time="2025-01-29T10:51:01.099124264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099180 containerd[1442]: time="2025-01-29T10:51:01.099135996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099180 containerd[1442]: time="2025-01-29T10:51:01.099173578Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 10:51:01.099267 containerd[1442]: time="2025-01-29T10:51:01.099194580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099267 containerd[1442]: time="2025-01-29T10:51:01.099208928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099267 containerd[1442]: time="2025-01-29T10:51:01.099219391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 10:51:01.099527 containerd[1442]: time="2025-01-29T10:51:01.099513120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 10:51:01.099601 containerd[1442]: time="2025-01-29T10:51:01.099535008Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 10:51:01.099601 containerd[1442]: time="2025-01-29T10:51:01.099546817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 10:51:01.099601 containerd[1442]: time="2025-01-29T10:51:01.099570551Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 10:51:01.099601 containerd[1442]: time="2025-01-29T10:51:01.099586437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.099601 containerd[1442]: time="2025-01-29T10:51:01.099597785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 10:51:01.099690 containerd[1442]: time="2025-01-29T10:51:01.099607902Z" level=info msg="NRI interface is disabled by configuration." Jan 29 10:51:01.099690 containerd[1442]: time="2025-01-29T10:51:01.099618788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 10:51:01.101170 containerd[1442]: time="2025-01-29T10:51:01.100202938Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:51:01.101170 containerd[1442]: time="2025-01-29T10:51:01.100557445Z" level=info msg="Connect containerd service" Jan 29 10:51:01.101170 containerd[1442]: time="2025-01-29T10:51:01.100635070Z" level=info msg="using legacy CRI server" Jan 29 10:51:01.101170 containerd[1442]: time="2025-01-29T10:51:01.100644609Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:51:01.101170 containerd[1442]: time="2025-01-29T10:51:01.100889102Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:51:01.101844 
containerd[1442]: time="2025-01-29T10:51:01.101800024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:51:01.102707 containerd[1442]: time="2025-01-29T10:51:01.102612512Z" level=info msg="Start subscribing containerd event" Jan 29 10:51:01.102707 containerd[1442]: time="2025-01-29T10:51:01.102668249Z" level=info msg="Start recovering state" Jan 29 10:51:01.102774 containerd[1442]: time="2025-01-29T10:51:01.102733450Z" level=info msg="Start event monitor" Jan 29 10:51:01.102774 containerd[1442]: time="2025-01-29T10:51:01.102743567Z" level=info msg="Start snapshots syncer" Jan 29 10:51:01.102774 containerd[1442]: time="2025-01-29T10:51:01.102752337Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:51:01.102774 containerd[1442]: time="2025-01-29T10:51:01.102758992Z" level=info msg="Start streaming server" Jan 29 10:51:01.103550 containerd[1442]: time="2025-01-29T10:51:01.103525358Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 10:51:01.103673 containerd[1442]: time="2025-01-29T10:51:01.103657413Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:51:01.103780 containerd[1442]: time="2025-01-29T10:51:01.103765157Z" level=info msg="containerd successfully booted in 0.039808s" Jan 29 10:51:01.103843 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:51:01.209179 tar[1432]: linux-arm64/LICENSE Jan 29 10:51:01.209277 tar[1432]: linux-arm64/README.md Jan 29 10:51:01.223242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 10:51:01.440230 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 10:51:01.457662 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 10:51:01.473757 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 10:51:01.478792 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 10:51:01.479056 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 10:51:01.482620 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 10:51:01.497869 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 10:51:01.509452 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 10:51:01.511764 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 10:51:01.513074 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 10:51:02.649265 systemd-networkd[1368]: eth0: Gained IPv6LL Jan 29 10:51:02.651588 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 10:51:02.653307 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 10:51:02.665356 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 10:51:02.667590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:51:02.669596 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 10:51:02.682736 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 10:51:02.682913 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 10:51:02.684985 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 29 10:51:02.690352 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 10:51:03.130570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:51:03.132110 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:51:03.134807 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:51:03.137525 systemd[1]: Startup finished in 616ms (kernel) + 5.388s (initrd) + 4.050s (userspace) = 10.055s. Jan 29 10:51:03.150719 agetty[1504]: failed to open credentials directory Jan 29 10:51:03.152054 agetty[1503]: failed to open credentials directory Jan 29 10:51:03.536705 kubelet[1527]: E0129 10:51:03.536598 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:51:03.538696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:51:03.538832 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:51:06.023155 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 10:51:06.024762 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:52702.service - OpenSSH per-connection server daemon (10.0.0.1:52702). Jan 29 10:51:06.092927 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 52702 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:06.094691 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:06.103717 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 10:51:06.116456 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:51:06.118021 systemd-logind[1422]: New session 1 of user core. Jan 29 10:51:06.127093 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 10:51:06.137491 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 10:51:06.156307 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:51:06.236966 systemd[1545]: Queued start job for default target default.target. Jan 29 10:51:06.245085 systemd[1545]: Created slice app.slice - User Application Slice. Jan 29 10:51:06.245138 systemd[1545]: Reached target paths.target - Paths. Jan 29 10:51:06.245170 systemd[1545]: Reached target timers.target - Timers. Jan 29 10:51:06.246373 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:51:06.255715 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:51:06.255778 systemd[1545]: Reached target sockets.target - Sockets. Jan 29 10:51:06.255790 systemd[1545]: Reached target basic.target - Basic System. Jan 29 10:51:06.255827 systemd[1545]: Reached target default.target - Main User Target. Jan 29 10:51:06.255854 systemd[1545]: Startup finished in 93ms. Jan 29 10:51:06.256082 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:51:06.257687 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 10:51:06.316537 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:52710.service - OpenSSH per-connection server daemon (10.0.0.1:52710). 
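[editor's note] The kubelet exit above (status=1/FAILURE, /var/lib/kubelet/config.yaml missing) is normal on a node where `kubeadm init`/`kubeadm join` has not run yet: kubeadm is what writes that file, so until then the unit fails and systemd restarts it on a timer, which is exactly the restart-counter pattern visible later in this log. A minimal sketch of a file that would satisfy the loader, assuming the kubelet.config.k8s.io/v1beta1 API; kubeadm generates a much fuller version:

```go
package main

import (
	"log"
	"os"
)

// Minimal KubeletConfiguration. cgroupDriver: systemd matches the
// SystemdCgroup:true runc option in the containerd CRI config dump above.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644)
	if err != nil {
		log.Fatal(err)
	}
}
```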
Jan 29 10:51:06.358362 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 52710 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:06.359619 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:06.364652 systemd-logind[1422]: New session 2 of user core. Jan 29 10:51:06.373316 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 10:51:06.423249 sshd[1558]: Connection closed by 10.0.0.1 port 52710 Jan 29 10:51:06.423859 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:06.430077 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:52710.service: Deactivated successfully. Jan 29 10:51:06.431350 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 10:51:06.434393 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Jan 29 10:51:06.435493 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:52722.service - OpenSSH per-connection server daemon (10.0.0.1:52722). Jan 29 10:51:06.436438 systemd-logind[1422]: Removed session 2. Jan 29 10:51:06.473670 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 52722 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:06.474837 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:06.479929 systemd-logind[1422]: New session 3 of user core. Jan 29 10:51:06.489301 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 10:51:06.535490 sshd[1565]: Connection closed by 10.0.0.1 port 52722 Jan 29 10:51:06.535819 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:06.552448 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:52722.service: Deactivated successfully. Jan 29 10:51:06.553808 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 10:51:06.554994 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Jan 29 10:51:06.556023 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:52738.service - OpenSSH per-connection server daemon (10.0.0.1:52738). Jan 29 10:51:06.556743 systemd-logind[1422]: Removed session 3. Jan 29 10:51:06.595018 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 52738 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:06.596084 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:06.599990 systemd-logind[1422]: New session 4 of user core. Jan 29 10:51:06.609300 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 10:51:06.658394 sshd[1572]: Connection closed by 10.0.0.1 port 52738 Jan 29 10:51:06.658713 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:06.668982 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:52738.service: Deactivated successfully. Jan 29 10:51:06.671457 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:51:06.672626 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:51:06.680474 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:52742.service - OpenSSH per-connection server daemon (10.0.0.1:52742). Jan 29 10:51:06.681279 systemd-logind[1422]: Removed session 4. 
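[editor's note] Every "Accepted publickey" record in this stretch carries the same SHA256:sI8AI+… string: that is the unpadded base64 SHA-256 digest of the client's public key blob, which is why one key produces one fingerprint across all of these sessions. A sketch of computing it from an authorized_keys-style line, assuming golang.org/x/crypto/ssh:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: fingerprint <authorized_keys-style file>")
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	pub, comment, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:<unpadded base64>", the same form sshd logs above.
	fmt.Println(ssh.FingerprintSHA256(pub), comment)
}
```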
Jan 29 10:51:06.715451 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 52742 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:06.716594 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:06.720042 systemd-logind[1422]: New session 5 of user core. Jan 29 10:51:06.734315 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 10:51:06.797502 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 10:51:06.797792 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:51:06.810005 sudo[1580]: pam_unix(sudo:session): session closed for user root Jan 29 10:51:06.813070 sshd[1579]: Connection closed by 10.0.0.1 port 52742 Jan 29 10:51:06.813611 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:06.828400 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:52742.service: Deactivated successfully. Jan 29 10:51:06.829707 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 10:51:06.830821 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Jan 29 10:51:06.832022 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:52744.service - OpenSSH per-connection server daemon (10.0.0.1:52744). Jan 29 10:51:06.832742 systemd-logind[1422]: Removed session 5. Jan 29 10:51:06.870482 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 52744 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:06.871625 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:06.876220 systemd-logind[1422]: New session 6 of user core. Jan 29 10:51:06.885299 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 10:51:06.934680 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 10:51:06.934946 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:51:06.938298 sudo[1589]: pam_unix(sudo:session): session closed for user root Jan 29 10:51:06.942958 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 10:51:06.943374 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:51:06.961440 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:51:06.983417 augenrules[1611]: No rules Jan 29 10:51:06.984050 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:51:06.984227 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:51:06.985181 sudo[1588]: pam_unix(sudo:session): session closed for user root Jan 29 10:51:06.986970 sshd[1587]: Connection closed by 10.0.0.1 port 52744 Jan 29 10:51:06.986850 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:06.996455 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:52744.service: Deactivated successfully. Jan 29 10:51:06.997760 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 10:51:06.998876 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Jan 29 10:51:06.999875 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:52746.service - OpenSSH per-connection server daemon (10.0.0.1:52746). Jan 29 10:51:07.001583 systemd-logind[1422]: Removed session 6. 
Jan 29 10:51:07.043731 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 52746 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:51:07.045034 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:07.048926 systemd-logind[1422]: New session 7 of user core. Jan 29 10:51:07.060325 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 10:51:07.110978 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:51:07.111270 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:51:07.449417 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 10:51:07.449515 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 10:51:07.695698 dockerd[1642]: time="2025-01-29T10:51:07.695622682Z" level=info msg="Starting up" Jan 29 10:51:08.183607 dockerd[1642]: time="2025-01-29T10:51:08.183545411Z" level=info msg="Loading containers: start." Jan 29 10:51:08.345232 kernel: Initializing XFRM netlink socket Jan 29 10:51:08.421767 systemd-networkd[1368]: docker0: Link UP Jan 29 10:51:08.454705 dockerd[1642]: time="2025-01-29T10:51:08.454564290Z" level=info msg="Loading containers: done." Jan 29 10:51:08.469471 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck786312829-merged.mount: Deactivated successfully. Jan 29 10:51:08.473463 dockerd[1642]: time="2025-01-29T10:51:08.473416946Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 10:51:08.473548 dockerd[1642]: time="2025-01-29T10:51:08.473531632Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 10:51:08.473737 dockerd[1642]: time="2025-01-29T10:51:08.473714162Z" level=info msg="Daemon has completed initialization" Jan 29 10:51:08.503922 dockerd[1642]: time="2025-01-29T10:51:08.503858452Z" level=info msg="API listen on /run/docker.sock" Jan 29 10:51:08.504091 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 10:51:09.279005 containerd[1442]: time="2025-01-29T10:51:09.278955966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 10:51:10.048473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671288445.mount: Deactivated successfully. 
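[editor's note] Once dockerd logs "API listen on /run/docker.sock" above, the daemon is usable. A quick liveness check against that socket, assuming the official Go SDK (github.com/docker/docker/client):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix socket when DOCKER_HOST is unset.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("daemon is up, API version:", ping.APIVersion)
}
```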
Jan 29 10:51:12.767319 containerd[1442]: time="2025-01-29T10:51:12.767254426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:12.768341 containerd[1442]: time="2025-01-29T10:51:12.768286675Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072" Jan 29 10:51:12.769565 containerd[1442]: time="2025-01-29T10:51:12.769504590Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:12.775649 containerd[1442]: time="2025-01-29T10:51:12.775571529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:12.776699 containerd[1442]: time="2025-01-29T10:51:12.776572933Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 3.497570574s" Jan 29 10:51:12.776699 containerd[1442]: time="2025-01-29T10:51:12.776609606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 29 10:51:12.777329 containerd[1442]: time="2025-01-29T10:51:12.777292050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 10:51:13.789172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 10:51:13.803379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:51:13.914789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:51:13.919481 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:51:13.961278 kubelet[1899]: E0129 10:51:13.961184 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:51:13.964356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:51:13.964503 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
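[editor's note] The PullImage / "Pulled image … returns image reference" pairs here come from the CRI plugin driving containerd's pull machinery. The same pull can be reproduced against the socket directly; a sketch assuming the containerd Go client, using the "k8s.io" namespace where CRI-managed images live (the same namespace the ctr CLI's -n flag selects):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images pulled through the CRI plugin live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.31.5", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}
```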
Jan 29 10:51:14.731394 containerd[1442]: time="2025-01-29T10:51:14.731333537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:14.731920 containerd[1442]: time="2025-01-29T10:51:14.731877159Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469" Jan 29 10:51:14.732647 containerd[1442]: time="2025-01-29T10:51:14.732622250Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:14.736235 containerd[1442]: time="2025-01-29T10:51:14.736177185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:14.737286 containerd[1442]: time="2025-01-29T10:51:14.737255291Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.959926153s" Jan 29 10:51:14.737328 containerd[1442]: time="2025-01-29T10:51:14.737290333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 29 10:51:14.737974 containerd[1442]: time="2025-01-29T10:51:14.737848019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 10:51:16.005669 containerd[1442]: time="2025-01-29T10:51:16.005607957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:16.006443 containerd[1442]: time="2025-01-29T10:51:16.006393949Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219" Jan 29 10:51:16.007146 containerd[1442]: time="2025-01-29T10:51:16.007104972Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:16.010250 containerd[1442]: time="2025-01-29T10:51:16.010205767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:16.011306 containerd[1442]: time="2025-01-29T10:51:16.011269953Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.273390095s" Jan 29 10:51:16.011348 containerd[1442]: time="2025-01-29T10:51:16.011307517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 29 10:51:16.011773 
containerd[1442]: time="2025-01-29T10:51:16.011744883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 10:51:17.281595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913009279.mount: Deactivated successfully. Jan 29 10:51:17.662535 containerd[1442]: time="2025-01-29T10:51:17.662400159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:17.663242 containerd[1442]: time="2025-01-29T10:51:17.663179336Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119" Jan 29 10:51:17.664018 containerd[1442]: time="2025-01-29T10:51:17.663977068Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:17.666770 containerd[1442]: time="2025-01-29T10:51:17.666735485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:17.667449 containerd[1442]: time="2025-01-29T10:51:17.667309714Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.65553295s" Jan 29 10:51:17.667449 containerd[1442]: time="2025-01-29T10:51:17.667338901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 10:51:17.667917 containerd[1442]: time="2025-01-29T10:51:17.667887447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 10:51:18.469219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount133339895.mount: Deactivated successfully. 
Jan 29 10:51:19.365030 containerd[1442]: time="2025-01-29T10:51:19.364972585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:19.365578 containerd[1442]: time="2025-01-29T10:51:19.365532956Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 10:51:19.366432 containerd[1442]: time="2025-01-29T10:51:19.366397749Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:19.373258 containerd[1442]: time="2025-01-29T10:51:19.373201168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:19.374216 containerd[1442]: time="2025-01-29T10:51:19.374191444Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.706265761s" Jan 29 10:51:19.374268 containerd[1442]: time="2025-01-29T10:51:19.374221419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 10:51:19.374880 containerd[1442]: time="2025-01-29T10:51:19.374701629Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 10:51:19.906511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467989068.mount: Deactivated successfully. 
Jan 29 10:51:19.911595 containerd[1442]: time="2025-01-29T10:51:19.911536895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:19.912882 containerd[1442]: time="2025-01-29T10:51:19.912842036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 10:51:19.913977 containerd[1442]: time="2025-01-29T10:51:19.913870181Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:19.917870 containerd[1442]: time="2025-01-29T10:51:19.917809879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:19.918824 containerd[1442]: time="2025-01-29T10:51:19.918761888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 544.029486ms" Jan 29 10:51:19.918881 containerd[1442]: time="2025-01-29T10:51:19.918855762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 10:51:19.919498 containerd[1442]: time="2025-01-29T10:51:19.919464086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 10:51:20.567021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667209071.mount: Deactivated successfully. Jan 29 10:51:23.200038 containerd[1442]: time="2025-01-29T10:51:23.199968080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:23.200586 containerd[1442]: time="2025-01-29T10:51:23.200533928Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 29 10:51:23.201431 containerd[1442]: time="2025-01-29T10:51:23.201371743Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:23.204531 containerd[1442]: time="2025-01-29T10:51:23.204494865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:23.206318 containerd[1442]: time="2025-01-29T10:51:23.206280111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.286785079s" Jan 29 10:51:23.206355 containerd[1442]: time="2025-01-29T10:51:23.206321586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 29 10:51:24.214762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 29 10:51:24.224362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:51:24.318466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:51:24.323947 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:51:24.359736 kubelet[2058]: E0129 10:51:24.359634 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:51:24.361342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:51:24.361490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:51:27.956098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:51:27.971660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:51:27.992263 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-7.scope)... Jan 29 10:51:27.992284 systemd[1]: Reloading... Jan 29 10:51:28.054179 zram_generator::config[2116]: No configuration found. Jan 29 10:51:28.300124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:51:28.353222 systemd[1]: Reloading finished in 360 ms. Jan 29 10:51:28.394853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:51:28.398254 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 10:51:28.398449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:51:28.399896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:51:28.498366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:51:28.502073 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:51:28.539798 kubelet[2160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:51:28.539798 kubelet[2160]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:51:28.539798 kubelet[2160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 10:51:28.539798 kubelet[2160]: I0129 10:51:28.539743 2160 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:51:29.338167 kubelet[2160]: I0129 10:51:29.336475 2160 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 10:51:29.338167 kubelet[2160]: I0129 10:51:29.336509 2160 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:51:29.338167 kubelet[2160]: I0129 10:51:29.336750 2160 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 10:51:29.362092 kubelet[2160]: E0129 10:51:29.362046 2160 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:29.362654 kubelet[2160]: I0129 10:51:29.362629 2160 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:51:29.371280 kubelet[2160]: E0129 10:51:29.371239 2160 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 10:51:29.371280 kubelet[2160]: I0129 10:51:29.371271 2160 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 10:51:29.374742 kubelet[2160]: I0129 10:51:29.374705 2160 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 10:51:29.374995 kubelet[2160]: I0129 10:51:29.374973 2160 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 10:51:29.375120 kubelet[2160]: I0129 10:51:29.375098 2160 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:51:29.375309 kubelet[2160]: I0129 10:51:29.375120 2160 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 10:51:29.375452 kubelet[2160]: I0129 10:51:29.375440 2160 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:51:29.375452 kubelet[2160]: I0129 10:51:29.375451 2160 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 10:51:29.375639 kubelet[2160]: I0129 10:51:29.375627 2160 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:51:29.377421 kubelet[2160]: I0129 10:51:29.377393 2160 kubelet.go:408] "Attempting to sync node with API server" Jan 29 10:51:29.377472 kubelet[2160]: I0129 10:51:29.377427 2160 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:51:29.380164 kubelet[2160]: I0129 10:51:29.377518 2160 kubelet.go:314] "Adding apiserver pod source" Jan 29 10:51:29.380164 kubelet[2160]: I0129 10:51:29.377532 2160 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:51:29.380164 kubelet[2160]: W0129 10:51:29.380008 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:29.380164 kubelet[2160]: E0129 10:51:29.380059 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:29.382261 kubelet[2160]: W0129 10:51:29.382205 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:29.382261 kubelet[2160]: E0129 10:51:29.382260 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:29.382680 kubelet[2160]: I0129 10:51:29.382648 2160 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:51:29.384515 kubelet[2160]: I0129 10:51:29.384443 2160 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:51:29.385128 kubelet[2160]: W0129 10:51:29.385096 2160 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 10:51:29.385894 kubelet[2160]: I0129 10:51:29.385778 2160 server.go:1269] "Started kubelet" Jan 29 10:51:29.386900 kubelet[2160]: I0129 10:51:29.386839 2160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:51:29.387419 kubelet[2160]: I0129 10:51:29.386965 2160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:51:29.387419 kubelet[2160]: I0129 10:51:29.387121 2160 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:51:29.387419 kubelet[2160]: I0129 10:51:29.387349 2160 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:51:29.387976 kubelet[2160]: I0129 10:51:29.387730 2160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 10:51:29.388522 kubelet[2160]: I0129 10:51:29.388473 2160 server.go:460] "Adding debug handlers to kubelet server" Jan 29 10:51:29.389499 kubelet[2160]: I0129 10:51:29.389478 2160 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 10:51:29.389580 kubelet[2160]: E0129 10:51:29.389549 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 10:51:29.389629 kubelet[2160]: I0129 10:51:29.389611 2160 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 10:51:29.389711 kubelet[2160]: E0129 10:51:29.389621 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Jan 29 10:51:29.389711 kubelet[2160]: I0129 10:51:29.389677 2160 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:51:29.390319 kubelet[2160]: I0129 10:51:29.389832 2160 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial 
unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:51:29.390319 kubelet[2160]: W0129 10:51:29.389926 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:29.390319 kubelet[2160]: E0129 10:51:29.389967 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:29.390319 kubelet[2160]: E0129 10:51:29.388789 2160 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f244bd9eed19b default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 10:51:29.385755035 +0000 UTC m=+0.880827049,LastTimestamp:2025-01-29 10:51:29.385755035 +0000 UTC m=+0.880827049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 10:51:29.391037 kubelet[2160]: E0129 10:51:29.391011 2160 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:51:29.392398 kubelet[2160]: I0129 10:51:29.392375 2160 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:51:29.392398 kubelet[2160]: I0129 10:51:29.392396 2160 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:51:29.401134 kubelet[2160]: I0129 10:51:29.401082 2160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:51:29.402317 kubelet[2160]: I0129 10:51:29.402289 2160 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:51:29.402317 kubelet[2160]: I0129 10:51:29.402319 2160 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:51:29.402416 kubelet[2160]: I0129 10:51:29.402335 2160 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 10:51:29.402416 kubelet[2160]: E0129 10:51:29.402375 2160 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:51:29.403030 kubelet[2160]: W0129 10:51:29.402853 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:29.403030 kubelet[2160]: E0129 10:51:29.402907 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:29.405649 kubelet[2160]: I0129 10:51:29.405431 2160 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:51:29.405649 kubelet[2160]: I0129 10:51:29.405446 2160 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:51:29.405649 kubelet[2160]: I0129 10:51:29.405462 2160 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:51:29.475746 kubelet[2160]: I0129 10:51:29.475717 2160 policy_none.go:49] "None policy: Start" Jan 29 10:51:29.477058 kubelet[2160]: I0129 10:51:29.476909 2160 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:51:29.477058 kubelet[2160]: I0129 10:51:29.476936 2160 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:51:29.483294 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 10:51:29.490261 kubelet[2160]: E0129 10:51:29.490215 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 10:51:29.493440 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:51:29.497872 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 10:51:29.502850 kubelet[2160]: E0129 10:51:29.502821 2160 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 10:51:29.506925 kubelet[2160]: I0129 10:51:29.506905 2160 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:51:29.507462 kubelet[2160]: I0129 10:51:29.507307 2160 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 10:51:29.507462 kubelet[2160]: I0129 10:51:29.507323 2160 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:51:29.508152 kubelet[2160]: I0129 10:51:29.508081 2160 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:51:29.508651 kubelet[2160]: E0129 10:51:29.508625 2160 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 10:51:29.590939 kubelet[2160]: E0129 10:51:29.590845 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Jan 29 10:51:29.608823 kubelet[2160]: I0129 10:51:29.608763 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 10:51:29.609105 kubelet[2160]: E0129 10:51:29.609084 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 29 10:51:29.710017 systemd[1]: Created slice kubepods-burstable-pod9d15970b67a1d341ed20ab95d3152efb.slice - libcontainer container kubepods-burstable-pod9d15970b67a1d341ed20ab95d3152efb.slice. Jan 29 10:51:29.720075 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 10:51:29.736180 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
Jan 29 10:51:29.792204 kubelet[2160]: I0129 10:51:29.792172 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d15970b67a1d341ed20ab95d3152efb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d15970b67a1d341ed20ab95d3152efb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 10:51:29.792204 kubelet[2160]: I0129 10:51:29.792207 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d15970b67a1d341ed20ab95d3152efb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d15970b67a1d341ed20ab95d3152efb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 10:51:29.792435 kubelet[2160]: I0129 10:51:29.792226 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:51:29.792435 kubelet[2160]: I0129 10:51:29.792254 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:51:29.792435 kubelet[2160]: I0129 10:51:29.792270 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:51:29.792435 kubelet[2160]: I0129 10:51:29.792284 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 10:51:29.792435 kubelet[2160]: I0129 10:51:29.792298 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d15970b67a1d341ed20ab95d3152efb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d15970b67a1d341ed20ab95d3152efb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 10:51:29.792588 kubelet[2160]: I0129 10:51:29.792313 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:51:29.792588 kubelet[2160]: I0129 10:51:29.792326 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 10:51:29.810115 kubelet[2160]: I0129 10:51:29.810063 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 10:51:29.810433 kubelet[2160]: E0129 10:51:29.810409 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 29 10:51:29.991966 kubelet[2160]: E0129 10:51:29.991850 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Jan 29 10:51:30.018921 kubelet[2160]: E0129 10:51:30.018547 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:30.019306 containerd[1442]: time="2025-01-29T10:51:30.019253417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d15970b67a1d341ed20ab95d3152efb,Namespace:kube-system,Attempt:0,}" Jan 29 10:51:30.034535 kubelet[2160]: E0129 10:51:30.034489 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:30.034870 containerd[1442]: time="2025-01-29T10:51:30.034811995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 10:51:30.039201 kubelet[2160]: E0129 10:51:30.039179 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:30.039559 containerd[1442]: time="2025-01-29T10:51:30.039492138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 10:51:30.211367 kubelet[2160]: I0129 10:51:30.211338 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 10:51:30.211681 kubelet[2160]: E0129 10:51:30.211628 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 29 10:51:30.232134 kubelet[2160]: W0129 10:51:30.232046 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:30.232134 kubelet[2160]: E0129 10:51:30.232113 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:30.239744 kubelet[2160]: W0129 10:51:30.239679 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.53:6443: connect: connection refused Jan 29 10:51:30.239744 kubelet[2160]: E0129 10:51:30.239713 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:30.535554 kubelet[2160]: W0129 10:51:30.535494 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:30.535678 kubelet[2160]: E0129 10:51:30.535563 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:30.611070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607958155.mount: Deactivated successfully. Jan 29 10:51:30.616566 containerd[1442]: time="2025-01-29T10:51:30.616485213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:51:30.618650 containerd[1442]: time="2025-01-29T10:51:30.618610116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 10:51:30.619580 containerd[1442]: time="2025-01-29T10:51:30.619535777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:51:30.621230 containerd[1442]: time="2025-01-29T10:51:30.621127866Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:51:30.622274 containerd[1442]: time="2025-01-29T10:51:30.622242336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:51:30.623197 containerd[1442]: time="2025-01-29T10:51:30.623153728Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:51:30.625162 containerd[1442]: time="2025-01-29T10:51:30.624447415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:51:30.625162 containerd[1442]: time="2025-01-29T10:51:30.624993140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:51:30.627046 containerd[1442]: time="2025-01-29T10:51:30.627001416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.134425ms" Jan 29 10:51:30.627775 containerd[1442]: time="2025-01-29T10:51:30.627750498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.387049ms" Jan 29 10:51:30.630167 containerd[1442]: time="2025-01-29T10:51:30.630125642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 590.566397ms" Jan 29 10:51:30.763339 containerd[1442]: time="2025-01-29T10:51:30.763228969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:51:30.763339 containerd[1442]: time="2025-01-29T10:51:30.763304189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:51:30.763339 containerd[1442]: time="2025-01-29T10:51:30.763330208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:30.763574 containerd[1442]: time="2025-01-29T10:51:30.763402950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:51:30.763574 containerd[1442]: time="2025-01-29T10:51:30.763464301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:51:30.763574 containerd[1442]: time="2025-01-29T10:51:30.763496595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:30.768006 containerd[1442]: time="2025-01-29T10:51:30.763686763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:30.768586 containerd[1442]: time="2025-01-29T10:51:30.768495923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:30.768923 containerd[1442]: time="2025-01-29T10:51:30.768854038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:51:30.768923 containerd[1442]: time="2025-01-29T10:51:30.768900800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:51:30.768923 containerd[1442]: time="2025-01-29T10:51:30.768911671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:30.769109 containerd[1442]: time="2025-01-29T10:51:30.768977459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:30.793113 kubelet[2160]: E0129 10:51:30.793002 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Jan 29 10:51:30.793308 systemd[1]: Started cri-containerd-61e3220d8540d4559d014432ea28e1dc238fcc2fea208afb698f9d88f69d81ea.scope - libcontainer container 61e3220d8540d4559d014432ea28e1dc238fcc2fea208afb698f9d88f69d81ea. Jan 29 10:51:30.794832 systemd[1]: Started cri-containerd-c05f4f5cacba0fa92fbecb3a3067969197a25c784e2669f83c25c810b013bed6.scope - libcontainer container c05f4f5cacba0fa92fbecb3a3067969197a25c784e2669f83c25c810b013bed6. Jan 29 10:51:30.797291 systemd[1]: Started cri-containerd-91794e86f3bb1b9ce71abe1cc88efe312f152e62c93f1f8d654ecda087f19abd.scope - libcontainer container 91794e86f3bb1b9ce71abe1cc88efe312f152e62c93f1f8d654ecda087f19abd. Jan 29 10:51:30.806281 kubelet[2160]: W0129 10:51:30.805632 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jan 29 10:51:30.806381 kubelet[2160]: E0129 10:51:30.806308 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:51:30.827347 containerd[1442]: time="2025-01-29T10:51:30.827306967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d15970b67a1d341ed20ab95d3152efb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c05f4f5cacba0fa92fbecb3a3067969197a25c784e2669f83c25c810b013bed6\"" Jan 29 10:51:30.828634 kubelet[2160]: E0129 10:51:30.828431 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:30.831518 containerd[1442]: time="2025-01-29T10:51:30.831487789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"61e3220d8540d4559d014432ea28e1dc238fcc2fea208afb698f9d88f69d81ea\"" Jan 29 10:51:30.832080 containerd[1442]: time="2025-01-29T10:51:30.832050940Z" level=info msg="CreateContainer within sandbox \"c05f4f5cacba0fa92fbecb3a3067969197a25c784e2669f83c25c810b013bed6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 10:51:30.832262 kubelet[2160]: E0129 10:51:30.832241 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:30.833982 containerd[1442]: time="2025-01-29T10:51:30.833958057Z" level=info msg="CreateContainer within sandbox \"61e3220d8540d4559d014432ea28e1dc238fcc2fea208afb698f9d88f69d81ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 10:51:30.837197 containerd[1442]: time="2025-01-29T10:51:30.837132163Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"91794e86f3bb1b9ce71abe1cc88efe312f152e62c93f1f8d654ecda087f19abd\"" Jan 29 10:51:30.837998 kubelet[2160]: E0129 10:51:30.837845 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:30.840340 containerd[1442]: time="2025-01-29T10:51:30.840280569Z" level=info msg="CreateContainer within sandbox \"91794e86f3bb1b9ce71abe1cc88efe312f152e62c93f1f8d654ecda087f19abd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 10:51:30.849683 containerd[1442]: time="2025-01-29T10:51:30.849553445Z" level=info msg="CreateContainer within sandbox \"c05f4f5cacba0fa92fbecb3a3067969197a25c784e2669f83c25c810b013bed6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e0a2457014f42cf2d7157cd8ad504f44d24ab3a1d8bcead99f3bc7b162d5a55\"" Jan 29 10:51:30.850121 containerd[1442]: time="2025-01-29T10:51:30.850094493Z" level=info msg="StartContainer for \"0e0a2457014f42cf2d7157cd8ad504f44d24ab3a1d8bcead99f3bc7b162d5a55\"" Jan 29 10:51:30.856506 containerd[1442]: time="2025-01-29T10:51:30.856464287Z" level=info msg="CreateContainer within sandbox \"61e3220d8540d4559d014432ea28e1dc238fcc2fea208afb698f9d88f69d81ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31a2ce5de444402ed92cd710b8a6a95a6554321c78cada88db3ee67386153d6d\"" Jan 29 10:51:30.857125 containerd[1442]: time="2025-01-29T10:51:30.857091227Z" level=info msg="StartContainer for \"31a2ce5de444402ed92cd710b8a6a95a6554321c78cada88db3ee67386153d6d\"" Jan 29 10:51:30.857754 containerd[1442]: time="2025-01-29T10:51:30.857714209Z" level=info msg="CreateContainer within sandbox \"91794e86f3bb1b9ce71abe1cc88efe312f152e62c93f1f8d654ecda087f19abd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c93e780061dd6b69c0c9d7ef4e23880cc603e599359fd5b53910ec50b36e6c17\"" Jan 29 10:51:30.858417 containerd[1442]: time="2025-01-29T10:51:30.858047303Z" level=info msg="StartContainer for \"c93e780061dd6b69c0c9d7ef4e23880cc603e599359fd5b53910ec50b36e6c17\"" Jan 29 10:51:30.877296 systemd[1]: Started cri-containerd-0e0a2457014f42cf2d7157cd8ad504f44d24ab3a1d8bcead99f3bc7b162d5a55.scope - libcontainer container 0e0a2457014f42cf2d7157cd8ad504f44d24ab3a1d8bcead99f3bc7b162d5a55. Jan 29 10:51:30.880975 systemd[1]: Started cri-containerd-c93e780061dd6b69c0c9d7ef4e23880cc603e599359fd5b53910ec50b36e6c17.scope - libcontainer container c93e780061dd6b69c0c9d7ef4e23880cc603e599359fd5b53910ec50b36e6c17. Jan 29 10:51:30.883741 systemd[1]: Started cri-containerd-31a2ce5de444402ed92cd710b8a6a95a6554321c78cada88db3ee67386153d6d.scope - libcontainer container 31a2ce5de444402ed92cd710b8a6a95a6554321c78cada88db3ee67386153d6d. 
Jan 29 10:51:30.912315 containerd[1442]: time="2025-01-29T10:51:30.912266813Z" level=info msg="StartContainer for \"0e0a2457014f42cf2d7157cd8ad504f44d24ab3a1d8bcead99f3bc7b162d5a55\" returns successfully"
Jan 29 10:51:30.948205 containerd[1442]: time="2025-01-29T10:51:30.948017469Z" level=info msg="StartContainer for \"c93e780061dd6b69c0c9d7ef4e23880cc603e599359fd5b53910ec50b36e6c17\" returns successfully"
Jan 29 10:51:30.953522 containerd[1442]: time="2025-01-29T10:51:30.948197086Z" level=info msg="StartContainer for \"31a2ce5de444402ed92cd710b8a6a95a6554321c78cada88db3ee67386153d6d\" returns successfully"
Jan 29 10:51:31.016716 kubelet[2160]: I0129 10:51:31.016671 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 10:51:31.017011 kubelet[2160]: E0129 10:51:31.016981 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Jan 29 10:51:31.409430 kubelet[2160]: E0129 10:51:31.409399 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:31.411071 kubelet[2160]: E0129 10:51:31.411052 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:31.413310 kubelet[2160]: E0129 10:51:31.413283 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:32.414631 kubelet[2160]: E0129 10:51:32.414578 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:32.618813 kubelet[2160]: I0129 10:51:32.618782 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 10:51:33.372393 kubelet[2160]: E0129 10:51:33.372337 2160 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 29 10:51:33.381660 kubelet[2160]: I0129 10:51:33.381614 2160 apiserver.go:52] "Watching apiserver"
Jan 29 10:51:33.389853 kubelet[2160]: I0129 10:51:33.389816 2160 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 10:51:33.463754 kubelet[2160]: I0129 10:51:33.463699 2160 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 29 10:51:35.475913 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-7.scope)...
Jan 29 10:51:35.475930 systemd[1]: Reloading...
Jan 29 10:51:35.497399 kubelet[2160]: E0129 10:51:35.497365 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:35.553229 zram_generator::config[2476]: No configuration found.
Jan 29 10:51:35.640252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:51:35.703835 systemd[1]: Reloading finished in 227 ms.
Jan 29 10:51:35.735052 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:51:35.750000 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 10:51:35.750219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:51:35.750264 systemd[1]: kubelet.service: Consumed 1.250s CPU time, 120.0M memory peak, 0B memory swap peak.
Jan 29 10:51:35.759487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:51:35.864797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:51:35.868796 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 10:51:35.908836 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 10:51:35.908836 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 10:51:35.908836 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 10:51:35.909205 kubelet[2520]: I0129 10:51:35.908984 2520 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 10:51:35.915973 kubelet[2520]: I0129 10:51:35.915931 2520 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 10:51:35.915973 kubelet[2520]: I0129 10:51:35.915959 2520 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 10:51:35.916181 kubelet[2520]: I0129 10:51:35.916167 2520 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 10:51:35.917519 kubelet[2520]: I0129 10:51:35.917484 2520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 10:51:35.919977 kubelet[2520]: I0129 10:51:35.919945 2520 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 10:51:35.924026 kubelet[2520]: E0129 10:51:35.923941 2520 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 10:51:35.924026 kubelet[2520]: I0129 10:51:35.924026 2520 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 10:51:35.926685 kubelet[2520]: I0129 10:51:35.926659 2520 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 10:51:35.926802 kubelet[2520]: I0129 10:51:35.926780 2520 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 10:51:35.926927 kubelet[2520]: I0129 10:51:35.926897 2520 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 10:51:35.927092 kubelet[2520]: I0129 10:51:35.926922 2520 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 10:51:35.927239 kubelet[2520]: I0129 10:51:35.927099 2520 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 10:51:35.927239 kubelet[2520]: I0129 10:51:35.927109 2520 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 10:51:35.927239 kubelet[2520]: I0129 10:51:35.927138 2520 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 10:51:35.927330 kubelet[2520]: I0129 10:51:35.927259 2520 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 10:51:35.927330 kubelet[2520]: I0129 10:51:35.927271 2520 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 10:51:35.927330 kubelet[2520]: I0129 10:51:35.927290 2520 kubelet.go:314] "Adding apiserver pod source"
Jan 29 10:51:35.927786 kubelet[2520]: I0129 10:51:35.927299 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 10:51:35.928637 kubelet[2520]: I0129 10:51:35.928616 2520 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 10:51:35.931507 kubelet[2520]: I0129 10:51:35.931468 2520 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 10:51:35.934898 kubelet[2520]: I0129 10:51:35.931898 2520 server.go:1269] "Started kubelet"
Jan 29 10:51:35.934898 kubelet[2520]: I0129 10:51:35.932481 2520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 10:51:35.934898 kubelet[2520]: I0129 10:51:35.933274 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 10:51:35.934898 kubelet[2520]: I0129 10:51:35.933617 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 10:51:35.934898 kubelet[2520]: I0129 10:51:35.932525 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 10:51:35.936628 kubelet[2520]: I0129 10:51:35.936599 2520 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 10:51:35.937171 kubelet[2520]: I0129 10:51:35.937122 2520 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 10:51:35.937429 kubelet[2520]: I0129 10:51:35.937399 2520 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 10:51:35.938092 kubelet[2520]: I0129 10:51:35.937975 2520 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 10:51:35.938432 kubelet[2520]: I0129 10:51:35.938415 2520 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 10:51:35.938915 kubelet[2520]: E0129 10:51:35.937406 2520 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 10:51:35.940339 kubelet[2520]: I0129 10:51:35.940321 2520 factory.go:221] Registration of the systemd container factory successfully
Jan 29 10:51:35.940648 kubelet[2520]: I0129 10:51:35.940629 2520 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 10:51:35.945483 kubelet[2520]: I0129 10:51:35.945433 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 10:51:35.946410 kubelet[2520]: I0129 10:51:35.946388 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 10:51:35.946469 kubelet[2520]: I0129 10:51:35.946415 2520 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 10:51:35.946469 kubelet[2520]: I0129 10:51:35.946431 2520 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 10:51:35.946533 kubelet[2520]: E0129 10:51:35.946477 2520 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 10:51:35.961165 kubelet[2520]: I0129 10:51:35.960738 2520 factory.go:221] Registration of the containerd container factory successfully
Jan 29 10:51:35.966326 kubelet[2520]: E0129 10:51:35.966297 2520 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 10:51:35.989201 kubelet[2520]: I0129 10:51:35.989092 2520 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 10:51:35.989201 kubelet[2520]: I0129 10:51:35.989110 2520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 10:51:35.989201 kubelet[2520]: I0129 10:51:35.989130 2520 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 10:51:35.990440 kubelet[2520]: I0129 10:51:35.990416 2520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 10:51:35.990511 kubelet[2520]: I0129 10:51:35.990440 2520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 10:51:35.990511 kubelet[2520]: I0129 10:51:35.990460 2520 policy_none.go:49] "None policy: Start"
Jan 29 10:51:35.991110 kubelet[2520]: I0129 10:51:35.991089 2520 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 10:51:35.991169 kubelet[2520]: I0129 10:51:35.991118 2520 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 10:51:35.991327 kubelet[2520]: I0129 10:51:35.991312 2520 state_mem.go:75] "Updated machine memory state"
Jan 29 10:51:35.997689 kubelet[2520]: I0129 10:51:35.997666 2520 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 10:51:35.998063 kubelet[2520]: I0129 10:51:35.997817 2520 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 10:51:35.998063 kubelet[2520]: I0129 10:51:35.997833 2520 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 10:51:35.998063 kubelet[2520]: I0129 10:51:35.997989 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 10:51:36.052639 kubelet[2520]: E0129 10:51:36.052542 2520 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 10:51:36.102530 kubelet[2520]: I0129 10:51:36.102504 2520 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 10:51:36.110126 kubelet[2520]: I0129 10:51:36.110095 2520 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 29 10:51:36.110303 kubelet[2520]: I0129 10:51:36.110189 2520 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 29 10:51:36.139791 kubelet[2520]: I0129 10:51:36.139746 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:51:36.139791 kubelet[2520]: I0129 10:51:36.139788 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:51:36.139951 kubelet[2520]: I0129 10:51:36.139810 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d15970b67a1d341ed20ab95d3152efb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d15970b67a1d341ed20ab95d3152efb\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 10:51:36.139951 kubelet[2520]: I0129 10:51:36.139825 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d15970b67a1d341ed20ab95d3152efb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d15970b67a1d341ed20ab95d3152efb\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 10:51:36.139951 kubelet[2520]: I0129 10:51:36.139841 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:51:36.139951 kubelet[2520]: I0129 10:51:36.139887 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:51:36.139951 kubelet[2520]: I0129 10:51:36.139918 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d15970b67a1d341ed20ab95d3152efb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d15970b67a1d341ed20ab95d3152efb\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 10:51:36.140066 kubelet[2520]: I0129 10:51:36.139939 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:51:36.140066 kubelet[2520]: I0129 10:51:36.139966 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 10:51:36.353214 kubelet[2520]: E0129 10:51:36.353072 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:36.353214 kubelet[2520]: E0129 10:51:36.353072 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:36.353462 kubelet[2520]: E0129 10:51:36.353413 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:36.469586 sudo[2556]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 10:51:36.469892 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 10:51:36.901987 sudo[2556]: pam_unix(sudo:session): session closed for user root
Jan 29 10:51:36.928579 kubelet[2520]: I0129 10:51:36.928228 2520 apiserver.go:52] "Watching apiserver"
Jan 29 10:51:36.938715 kubelet[2520]: I0129 10:51:36.938680 2520 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 10:51:36.983503 kubelet[2520]: E0129 10:51:36.977174 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:36.983503 kubelet[2520]: E0129 10:51:36.977527 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:37.017178 kubelet[2520]: E0129 10:51:37.016316 2520 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 10:51:37.017178 kubelet[2520]: E0129 10:51:37.016494 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:37.040049 kubelet[2520]: I0129 10:51:37.039898 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.039753183 podStartE2EDuration="2.039753183s" podCreationTimestamp="2025-01-29 10:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:51:37.039589481 +0000 UTC m=+1.167867498" watchObservedRunningTime="2025-01-29 10:51:37.039753183 +0000 UTC m=+1.168031160"
Jan 29 10:51:37.107762 kubelet[2520]: I0129 10:51:37.107694 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.107674207 podStartE2EDuration="1.107674207s" podCreationTimestamp="2025-01-29 10:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:51:37.081567819 +0000 UTC m=+1.209845956" watchObservedRunningTime="2025-01-29 10:51:37.107674207 +0000 UTC m=+1.235952184"
Jan 29 10:51:37.108219 kubelet[2520]: I0129 10:51:37.107838 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.107832828 podStartE2EDuration="1.107832828s" podCreationTimestamp="2025-01-29 10:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:51:37.107574594 +0000 UTC m=+1.235852651" watchObservedRunningTime="2025-01-29 10:51:37.107832828 +0000 UTC m=+1.236110845"
Jan 29 10:51:37.978742 kubelet[2520]: E0129 10:51:37.978686 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:38.672945 sudo[1622]: pam_unix(sudo:session): session closed for user root
Jan 29 10:51:38.674046 sshd[1621]: Connection closed by 10.0.0.1 port 52746
Jan 29 10:51:38.674382 sshd-session[1619]: pam_unix(sshd:session): session closed for user core
Jan 29 10:51:38.677673 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:52746.service: Deactivated successfully.
Jan 29 10:51:38.679286 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 10:51:38.679457 systemd[1]: session-7.scope: Consumed 6.885s CPU time, 155.8M memory peak, 0B memory swap peak.
Jan 29 10:51:38.680261 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Jan 29 10:51:38.681529 systemd-logind[1422]: Removed session 7.
Jan 29 10:51:40.213422 kubelet[2520]: I0129 10:51:40.213380 2520 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 10:51:40.214295 containerd[1442]: time="2025-01-29T10:51:40.214252264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 10:51:40.214700 kubelet[2520]: I0129 10:51:40.214661 2520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 10:51:40.489117 kubelet[2520]: E0129 10:51:40.488999 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:40.985311 kubelet[2520]: E0129 10:51:40.985254 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.058059 systemd[1]: Created slice kubepods-besteffort-pod2a7c8b05_bf9f_45c2_803e_1198db7a3c89.slice - libcontainer container kubepods-besteffort-pod2a7c8b05_bf9f_45c2_803e_1198db7a3c89.slice.
Jan 29 10:51:41.074075 systemd[1]: Created slice kubepods-burstable-pod97cbe104_0c6d_40f4_bd6a_64d6cd581d22.slice - libcontainer container kubepods-burstable-pod97cbe104_0c6d_40f4_bd6a_64d6cd581d22.slice.
Jan 29 10:51:41.074298 kubelet[2520]: I0129 10:51:41.073272 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-cgroup\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074371 kubelet[2520]: I0129 10:51:41.074324 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqnh\" (UniqueName: \"kubernetes.io/projected/2a7c8b05-bf9f-45c2-803e-1198db7a3c89-kube-api-access-msqnh\") pod \"kube-proxy-pd4f9\" (UID: \"2a7c8b05-bf9f-45c2-803e-1198db7a3c89\") " pod="kube-system/kube-proxy-pd4f9"
Jan 29 10:51:41.074371 kubelet[2520]: I0129 10:51:41.074350 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a7c8b05-bf9f-45c2-803e-1198db7a3c89-lib-modules\") pod \"kube-proxy-pd4f9\" (UID: \"2a7c8b05-bf9f-45c2-803e-1198db7a3c89\") " pod="kube-system/kube-proxy-pd4f9"
Jan 29 10:51:41.074371 kubelet[2520]: I0129 10:51:41.074369 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-config-path\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074477 kubelet[2520]: I0129 10:51:41.074384 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-xtables-lock\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074477 kubelet[2520]: I0129 10:51:41.074397 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-net\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074477 kubelet[2520]: I0129 10:51:41.074440 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-kernel\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074581 kubelet[2520]: I0129 10:51:41.074482 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-run\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074581 kubelet[2520]: I0129 10:51:41.074516 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-etc-cni-netd\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074581 kubelet[2520]: I0129 10:51:41.074533 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-clustermesh-secrets\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074581 kubelet[2520]: I0129 10:51:41.074548 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hubble-tls\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074581 kubelet[2520]: I0129 10:51:41.074575 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hostproc\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074738 kubelet[2520]: I0129 10:51:41.074599 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a7c8b05-bf9f-45c2-803e-1198db7a3c89-kube-proxy\") pod \"kube-proxy-pd4f9\" (UID: \"2a7c8b05-bf9f-45c2-803e-1198db7a3c89\") " pod="kube-system/kube-proxy-pd4f9"
Jan 29 10:51:41.074738 kubelet[2520]: I0129 10:51:41.074621 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-bpf-maps\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074738 kubelet[2520]: I0129 10:51:41.074638 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cni-path\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074738 kubelet[2520]: I0129 10:51:41.074666 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a7c8b05-bf9f-45c2-803e-1198db7a3c89-xtables-lock\") pod \"kube-proxy-pd4f9\" (UID: \"2a7c8b05-bf9f-45c2-803e-1198db7a3c89\") " pod="kube-system/kube-proxy-pd4f9"
Jan 29 10:51:41.074738 kubelet[2520]: I0129 10:51:41.074701 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqdb\" (UniqueName: \"kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-kube-api-access-xmqdb\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.074738 kubelet[2520]: I0129 10:51:41.074725 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-lib-modules\") pod \"cilium-9s8w6\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") " pod="kube-system/cilium-9s8w6"
Jan 29 10:51:41.210957 systemd[1]: Created slice kubepods-besteffort-pod278b186c_d3a9_4123_9357_a5929a01d1d3.slice - libcontainer container kubepods-besteffort-pod278b186c_d3a9_4123_9357_a5929a01d1d3.slice.
Jan 29 10:51:41.276836 kubelet[2520]: I0129 10:51:41.276683 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/278b186c-d3a9-4123-9357-a5929a01d1d3-cilium-config-path\") pod \"cilium-operator-5d85765b45-crl5x\" (UID: \"278b186c-d3a9-4123-9357-a5929a01d1d3\") " pod="kube-system/cilium-operator-5d85765b45-crl5x"
Jan 29 10:51:41.276836 kubelet[2520]: I0129 10:51:41.276734 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557t5\" (UniqueName: \"kubernetes.io/projected/278b186c-d3a9-4123-9357-a5929a01d1d3-kube-api-access-557t5\") pod \"cilium-operator-5d85765b45-crl5x\" (UID: \"278b186c-d3a9-4123-9357-a5929a01d1d3\") " pod="kube-system/cilium-operator-5d85765b45-crl5x"
Jan 29 10:51:41.370199 kubelet[2520]: E0129 10:51:41.370124 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.370765 containerd[1442]: time="2025-01-29T10:51:41.370730142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd4f9,Uid:2a7c8b05-bf9f-45c2-803e-1198db7a3c89,Namespace:kube-system,Attempt:0,}"
Jan 29 10:51:41.377637 kubelet[2520]: E0129 10:51:41.377606 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.378278 containerd[1442]: time="2025-01-29T10:51:41.378029722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9s8w6,Uid:97cbe104-0c6d-40f4-bd6a-64d6cd581d22,Namespace:kube-system,Attempt:0,}"
Jan 29 10:51:41.402313 containerd[1442]: time="2025-01-29T10:51:41.402215466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:51:41.402313 containerd[1442]: time="2025-01-29T10:51:41.402272632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:51:41.402313 containerd[1442]: time="2025-01-29T10:51:41.402289074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:51:41.402852 containerd[1442]: time="2025-01-29T10:51:41.402783967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:51:41.418531 containerd[1442]: time="2025-01-29T10:51:41.418438119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:51:41.418531 containerd[1442]: time="2025-01-29T10:51:41.418495525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:51:41.418531 containerd[1442]: time="2025-01-29T10:51:41.418511847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:51:41.418714 containerd[1442]: time="2025-01-29T10:51:41.418587095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:51:41.419301 systemd[1]: Started cri-containerd-15a7437a661cdf9efafb9b579613cf585be15107971bd558d7f1b23b97b50d3f.scope - libcontainer container 15a7437a661cdf9efafb9b579613cf585be15107971bd558d7f1b23b97b50d3f.
Jan 29 10:51:41.430435 systemd[1]: Started cri-containerd-b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced.scope - libcontainer container b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced.
Jan 29 10:51:41.439930 containerd[1442]: time="2025-01-29T10:51:41.439848647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd4f9,Uid:2a7c8b05-bf9f-45c2-803e-1198db7a3c89,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a7437a661cdf9efafb9b579613cf585be15107971bd558d7f1b23b97b50d3f\""
Jan 29 10:51:41.441528 kubelet[2520]: E0129 10:51:41.440534 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.443840 containerd[1442]: time="2025-01-29T10:51:41.443810070Z" level=info msg="CreateContainer within sandbox \"15a7437a661cdf9efafb9b579613cf585be15107971bd558d7f1b23b97b50d3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 10:51:41.455261 containerd[1442]: time="2025-01-29T10:51:41.455230810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9s8w6,Uid:97cbe104-0c6d-40f4-bd6a-64d6cd581d22,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\""
Jan 29 10:51:41.455954 kubelet[2520]: E0129 10:51:41.455930 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.457096 containerd[1442]: time="2025-01-29T10:51:41.457017241Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 10:51:41.486712 containerd[1442]: time="2025-01-29T10:51:41.486659768Z" level=info msg="CreateContainer within sandbox \"15a7437a661cdf9efafb9b579613cf585be15107971bd558d7f1b23b97b50d3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92e749db4a65e92f1dbc39cf46681a7f0bcc3170f7381a4ead7dfe6d8531fc4c\""
Jan 29 10:51:41.487259 containerd[1442]: time="2025-01-29T10:51:41.487229989Z" level=info msg="StartContainer for \"92e749db4a65e92f1dbc39cf46681a7f0bcc3170f7381a4ead7dfe6d8531fc4c\""
Jan 29 10:51:41.513437 kubelet[2520]: E0129 10:51:41.513404 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.514205 containerd[1442]: time="2025-01-29T10:51:41.514166827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-crl5x,Uid:278b186c-d3a9-4123-9357-a5929a01d1d3,Namespace:kube-system,Attempt:0,}"
Jan 29 10:51:41.517459 systemd[1]: Started cri-containerd-92e749db4a65e92f1dbc39cf46681a7f0bcc3170f7381a4ead7dfe6d8531fc4c.scope - libcontainer container 92e749db4a65e92f1dbc39cf46681a7f0bcc3170f7381a4ead7dfe6d8531fc4c.
Jan 29 10:51:41.537366 containerd[1442]: time="2025-01-29T10:51:41.535467943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:51:41.537366 containerd[1442]: time="2025-01-29T10:51:41.535515548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:51:41.537366 containerd[1442]: time="2025-01-29T10:51:41.535525829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:51:41.537608 containerd[1442]: time="2025-01-29T10:51:41.535585316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:51:41.550876 containerd[1442]: time="2025-01-29T10:51:41.549348666Z" level=info msg="StartContainer for \"92e749db4a65e92f1dbc39cf46681a7f0bcc3170f7381a4ead7dfe6d8531fc4c\" returns successfully"
Jan 29 10:51:41.571334 systemd[1]: Started cri-containerd-545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7.scope - libcontainer container 545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7.
Jan 29 10:51:41.608417 containerd[1442]: time="2025-01-29T10:51:41.608370372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-crl5x,Uid:278b186c-d3a9-4123-9357-a5929a01d1d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7\""
Jan 29 10:51:41.611172 kubelet[2520]: E0129 10:51:41.609207 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:41.989792 kubelet[2520]: E0129 10:51:41.989756 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:42.002877 kubelet[2520]: I0129 10:51:42.001089 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pd4f9" podStartSLOduration=1.001060529 podStartE2EDuration="1.001060529s" podCreationTimestamp="2025-01-29 10:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:51:41.999186529 +0000 UTC m=+6.127464586" watchObservedRunningTime="2025-01-29 10:51:42.001060529 +0000 UTC m=+6.129338586"
Jan 29 10:51:42.364368 kubelet[2520]: E0129 10:51:42.364316 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:42.694065 kubelet[2520]: E0129 10:51:42.693949 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:42.992690 kubelet[2520]: E0129 10:51:42.992565 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:42.993724 kubelet[2520]: E0129 10:51:42.993682 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:51:46.087479 update_engine[1425]: I20250129 10:51:46.087375 1425 update_attempter.cc:509] Updating boot flags...
Jan 29 10:51:46.127180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2893)
Jan 29 10:51:55.549377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327122178.mount: Deactivated successfully.
Jan 29 10:51:56.860349 containerd[1442]: time="2025-01-29T10:51:56.860296100Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:51:56.861324 containerd[1442]: time="2025-01-29T10:51:56.861293912Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 29 10:51:56.862106 containerd[1442]: time="2025-01-29T10:51:56.862074672Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:51:56.863672 containerd[1442]: time="2025-01-29T10:51:56.863644394Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.406572948s"
Jan 29 10:51:56.863728 containerd[1442]: time="2025-01-29T10:51:56.863677756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 29 10:51:56.867809 containerd[1442]: time="2025-01-29T10:51:56.867770249Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 10:51:56.868439 containerd[1442]: time="2025-01-29T10:51:56.868409362Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 10:51:56.889040 containerd[1442]: time="2025-01-29T10:51:56.888988713Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\""
Jan 29 10:51:56.890197 containerd[1442]: time="2025-01-29T10:51:56.889588425Z" level=info msg="StartContainer for \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\""
Jan 29 10:51:56.917329 systemd[1]: Started cri-containerd-f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d.scope - libcontainer container f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d.
Jan 29 10:51:56.945886 containerd[1442]: time="2025-01-29T10:51:56.945835273Z" level=info msg="StartContainer for \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\" returns successfully"
Jan 29 10:51:57.006612 systemd[1]: cri-containerd-f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d.scope: Deactivated successfully.
Jan 29 10:51:57.029174 kubelet[2520]: E0129 10:51:57.029120 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:57.161704 containerd[1442]: time="2025-01-29T10:51:57.161556647Z" level=info msg="shim disconnected" id=f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d namespace=k8s.io Jan 29 10:51:57.161704 containerd[1442]: time="2025-01-29T10:51:57.161617570Z" level=warning msg="cleaning up after shim disconnected" id=f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d namespace=k8s.io Jan 29 10:51:57.161704 containerd[1442]: time="2025-01-29T10:51:57.161627571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:57.885614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d-rootfs.mount: Deactivated successfully. Jan 29 10:51:58.029317 kubelet[2520]: E0129 10:51:58.028821 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:58.031590 containerd[1442]: time="2025-01-29T10:51:58.031124365Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 10:51:58.052466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582812330.mount: Deactivated successfully. Jan 29 10:51:58.054632 containerd[1442]: time="2025-01-29T10:51:58.054361720Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\"" Jan 29 10:51:58.054985 containerd[1442]: time="2025-01-29T10:51:58.054949668Z" level=info msg="StartContainer for \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\"" Jan 29 10:51:58.085996 systemd[1]: Started cri-containerd-20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd.scope - libcontainer container 20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd. Jan 29 10:51:58.111495 containerd[1442]: time="2025-01-29T10:51:58.111452540Z" level=info msg="StartContainer for \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\" returns successfully" Jan 29 10:51:58.143016 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:51:58.143828 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:51:58.144009 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:51:58.149448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:51:58.149656 systemd[1]: cri-containerd-20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd.scope: Deactivated successfully. Jan 29 10:51:58.186298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
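Note: the systemd-sysctl.service stop/restart pair above is triggered by Cilium's apply-sysctl-overwrites init container rewriting kernel parameters, after which systemd re-applies its own sysctl configuration. For illustration, a sketch that reads one such parameter through `/proc/sys`; rp_filter is an assumed example of a knob Cilium touches, not something visible in this log:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// Sysctl values are exposed as files under /proc/sys, with dots mapped to slashes.
func readSysctl(name string) (string, error) {
	b, err := os.ReadFile("/proc/sys/" + strings.ReplaceAll(name, ".", "/"))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Assumed example knob; the exact overrides are not recorded in this journal.
	v, err := readSysctl("net.ipv4.conf.all.rp_filter")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("net.ipv4.conf.all.rp_filter =", v)
}
```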
Jan 29 10:51:58.186651 containerd[1442]: time="2025-01-29T10:51:58.186400538Z" level=info msg="shim disconnected" id=20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd namespace=k8s.io Jan 29 10:51:58.186651 containerd[1442]: time="2025-01-29T10:51:58.186451540Z" level=warning msg="cleaning up after shim disconnected" id=20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd namespace=k8s.io Jan 29 10:51:58.186651 containerd[1442]: time="2025-01-29T10:51:58.186461101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:58.381792 containerd[1442]: time="2025-01-29T10:51:58.381439059Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:58.383576 containerd[1442]: time="2025-01-29T10:51:58.383516639Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 10:51:58.384600 containerd[1442]: time="2025-01-29T10:51:58.384554929Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:51:58.386079 containerd[1442]: time="2025-01-29T10:51:58.386029439Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.518226869s" Jan 29 10:51:58.386079 containerd[1442]: time="2025-01-29T10:51:58.386060801Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 10:51:58.394313 containerd[1442]: time="2025-01-29T10:51:58.394195951Z" level=info msg="CreateContainer within sandbox \"545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 10:51:58.410745 containerd[1442]: time="2025-01-29T10:51:58.410701504Z" level=info msg="CreateContainer within sandbox \"545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\"" Jan 29 10:51:58.412696 containerd[1442]: time="2025-01-29T10:51:58.412660278Z" level=info msg="StartContainer for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\"" Jan 29 10:51:58.437312 systemd[1]: Started cri-containerd-63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88.scope - libcontainer container 63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88. 
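Note: every short-lived init container in this journal exits with the same three-entry epilogue ("shim disconnected", "cleaning up after shim disconnected", "cleaning up dead shim"), followed by a rootfs.mount deactivation. A small sketch that extracts the affected container ids from journal text on stdin, assuming the exact message format shown above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches containerd's shim-teardown entries as they appear in this journal.
var shimRE = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

func main() {
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := shimRE.FindStringSubmatch(sc.Text()); m != nil && !seen[m[1]] {
			seen[m[1]] = true
			fmt.Println("shim torn down for container", m[1][:12])
		}
	}
}
```

Piping this journal through it would list the init containers in exit order, starting with f1fde6cb3221 (mount-cgroup) and 20d73bbcec52 (apply-sysctl-overwrites) above.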
Jan 29 10:51:58.458180 containerd[1442]: time="2025-01-29T10:51:58.458088898Z" level=info msg="StartContainer for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" returns successfully" Jan 29 10:51:58.886477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd-rootfs.mount: Deactivated successfully. Jan 29 10:51:59.037331 kubelet[2520]: E0129 10:51:59.034721 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:59.045267 kubelet[2520]: E0129 10:51:59.045061 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:51:59.047080 containerd[1442]: time="2025-01-29T10:51:59.047028161Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 10:51:59.062470 kubelet[2520]: I0129 10:51:59.061873 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-crl5x" podStartSLOduration=1.279302333 podStartE2EDuration="18.061854526s" podCreationTimestamp="2025-01-29 10:51:41 +0000 UTC" firstStartedPulling="2025-01-29 10:51:41.610205889 +0000 UTC m=+5.738483906" lastFinishedPulling="2025-01-29 10:51:58.392758122 +0000 UTC m=+22.521036099" observedRunningTime="2025-01-29 10:51:59.061835645 +0000 UTC m=+23.190113662" watchObservedRunningTime="2025-01-29 10:51:59.061854526 +0000 UTC m=+23.190132543" Jan 29 10:51:59.089122 containerd[1442]: time="2025-01-29T10:51:59.088400271Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\"" Jan 29 10:51:59.089543 containerd[1442]: time="2025-01-29T10:51:59.089513522Z" level=info msg="StartContainer for \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\"" Jan 29 10:51:59.130378 systemd[1]: Started cri-containerd-adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf.scope - libcontainer container adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf. Jan 29 10:51:59.161627 containerd[1442]: time="2025-01-29T10:51:59.161507885Z" level=info msg="StartContainer for \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\" returns successfully" Jan 29 10:51:59.178352 systemd[1]: cri-containerd-adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf.scope: Deactivated successfully. 
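Note: the pod_startup_latency_tracker entry above for cilium-operator-5d85765b45-crl5x is internally consistent once you use the monotonic offsets (m=+...) rather than the wall-clock stamps: podStartSLOduration is podStartE2EDuration minus the observed image-pull window. Redoing the arithmetic with the offsets copied from that entry:

```go
package main

import "fmt"

func main() {
	// Values copied from the tracker entry above (monotonic m=+ offsets).
	const (
		e2e       = 18.061854526 // podStartE2EDuration, seconds
		firstPull = 5.738483906  // firstStartedPulling, m=+ offset
		lastPull  = 22.521036099 // lastFinishedPulling, m=+ offset
	)

	pullWindow := lastPull - firstPull
	fmt.Printf("image pull window:   %.9fs\n", pullWindow)     // 16.782552193s
	fmt.Printf("podStartSLOduration: %.9fs\n", e2e-pullWindow) // 1.279302333s, matching the log
}
```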
Jan 29 10:51:59.270236 containerd[1442]: time="2025-01-29T10:51:59.270163180Z" level=info msg="shim disconnected" id=adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf namespace=k8s.io Jan 29 10:51:59.270236 containerd[1442]: time="2025-01-29T10:51:59.270229903Z" level=warning msg="cleaning up after shim disconnected" id=adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf namespace=k8s.io Jan 29 10:51:59.270236 containerd[1442]: time="2025-01-29T10:51:59.270239423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:59.887907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf-rootfs.mount: Deactivated successfully. Jan 29 10:52:00.051214 kubelet[2520]: E0129 10:52:00.051128 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:00.052466 kubelet[2520]: E0129 10:52:00.052416 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:00.056476 containerd[1442]: time="2025-01-29T10:52:00.055993195Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 10:52:00.078459 containerd[1442]: time="2025-01-29T10:52:00.078407110Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\"" Jan 29 10:52:00.079001 containerd[1442]: time="2025-01-29T10:52:00.078946494Z" level=info msg="StartContainer for \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\"" Jan 29 10:52:00.106320 systemd[1]: Started cri-containerd-6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e.scope - libcontainer container 6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e. Jan 29 10:52:00.129011 systemd[1]: cri-containerd-6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e.scope: Deactivated successfully. Jan 29 10:52:00.131406 containerd[1442]: time="2025-01-29T10:52:00.131336822Z" level=info msg="StartContainer for \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\" returns successfully" Jan 29 10:52:00.146341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e-rootfs.mount: Deactivated successfully. 
Jan 29 10:52:00.153049 containerd[1442]: time="2025-01-29T10:52:00.152994744Z" level=info msg="shim disconnected" id=6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e namespace=k8s.io Jan 29 10:52:00.153049 containerd[1442]: time="2025-01-29T10:52:00.153045746Z" level=warning msg="cleaning up after shim disconnected" id=6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e namespace=k8s.io Jan 29 10:52:00.153049 containerd[1442]: time="2025-01-29T10:52:00.153054467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:52:01.053318 kubelet[2520]: E0129 10:52:01.053199 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:01.055529 containerd[1442]: time="2025-01-29T10:52:01.055301343Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 10:52:01.071908 containerd[1442]: time="2025-01-29T10:52:01.071865732Z" level=info msg="CreateContainer within sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\"" Jan 29 10:52:01.073347 containerd[1442]: time="2025-01-29T10:52:01.073317234Z" level=info msg="StartContainer for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\"" Jan 29 10:52:01.101313 systemd[1]: Started cri-containerd-440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105.scope - libcontainer container 440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105. Jan 29 10:52:01.126156 containerd[1442]: time="2025-01-29T10:52:01.126113734Z" level=info msg="StartContainer for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" returns successfully" Jan 29 10:52:01.312000 kubelet[2520]: I0129 10:52:01.311891 2520 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 10:52:01.357247 systemd[1]: Created slice kubepods-burstable-pod52bddd13_a851_4887_91db_87d15edf1373.slice - libcontainer container kubepods-burstable-pod52bddd13_a851_4887_91db_87d15edf1373.slice. Jan 29 10:52:01.364934 systemd[1]: Created slice kubepods-burstable-pod25c0c1bf_db15_4e7c_b157_bbed45244c63.slice - libcontainer container kubepods-burstable-pod25c0c1bf_db15_4e7c_b157_bbed45244c63.slice. 
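Note: with cilium-agent running, the kubelet marks the node ready and admits the two pending coredns pods; each one gets a kubepods-burstable systemd slice whose unit name embeds the pod UID with dashes escaped to underscores. A sketch of that naming rule, checked against the slice names above:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the kubepods slice naming visible in this journal:
// dashes in the pod UID become underscores in the systemd unit name.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "52bddd13-a851-4887-91db-87d15edf1373"))
	// kubepods-burstable-pod52bddd13_a851_4887_91db_87d15edf1373.slice
}
```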
Jan 29 10:52:01.433643 kubelet[2520]: I0129 10:52:01.433513 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94vh\" (UniqueName: \"kubernetes.io/projected/25c0c1bf-db15-4e7c-b157-bbed45244c63-kube-api-access-r94vh\") pod \"coredns-6f6b679f8f-zvdkt\" (UID: \"25c0c1bf-db15-4e7c-b157-bbed45244c63\") " pod="kube-system/coredns-6f6b679f8f-zvdkt" Jan 29 10:52:01.433643 kubelet[2520]: I0129 10:52:01.433558 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dww98\" (UniqueName: \"kubernetes.io/projected/52bddd13-a851-4887-91db-87d15edf1373-kube-api-access-dww98\") pod \"coredns-6f6b679f8f-nzrtf\" (UID: \"52bddd13-a851-4887-91db-87d15edf1373\") " pod="kube-system/coredns-6f6b679f8f-nzrtf" Jan 29 10:52:01.433643 kubelet[2520]: I0129 10:52:01.433583 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52bddd13-a851-4887-91db-87d15edf1373-config-volume\") pod \"coredns-6f6b679f8f-nzrtf\" (UID: \"52bddd13-a851-4887-91db-87d15edf1373\") " pod="kube-system/coredns-6f6b679f8f-nzrtf" Jan 29 10:52:01.433643 kubelet[2520]: I0129 10:52:01.433602 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c0c1bf-db15-4e7c-b157-bbed45244c63-config-volume\") pod \"coredns-6f6b679f8f-zvdkt\" (UID: \"25c0c1bf-db15-4e7c-b157-bbed45244c63\") " pod="kube-system/coredns-6f6b679f8f-zvdkt" Jan 29 10:52:01.660546 kubelet[2520]: E0129 10:52:01.660449 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:01.663214 containerd[1442]: time="2025-01-29T10:52:01.662871071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nzrtf,Uid:52bddd13-a851-4887-91db-87d15edf1373,Namespace:kube-system,Attempt:0,}" Jan 29 10:52:01.667952 kubelet[2520]: E0129 10:52:01.667470 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:01.668087 containerd[1442]: time="2025-01-29T10:52:01.668047692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zvdkt,Uid:25c0c1bf-db15-4e7c-b157-bbed45244c63,Namespace:kube-system,Attempt:0,}" Jan 29 10:52:01.985836 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:48626.service - OpenSSH per-connection server daemon (10.0.0.1:48626). Jan 29 10:52:02.029510 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 48626 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:02.030981 sshd-session[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:02.035700 systemd-logind[1422]: New session 8 of user core. Jan 29 10:52:02.046399 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 10:52:02.059172 kubelet[2520]: E0129 10:52:02.058437 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:02.085973 kubelet[2520]: I0129 10:52:02.085903 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9s8w6" podStartSLOduration=5.675407628 podStartE2EDuration="21.08588817s" podCreationTimestamp="2025-01-29 10:51:41 +0000 UTC" firstStartedPulling="2025-01-29 10:51:41.456501626 +0000 UTC m=+5.584779643" lastFinishedPulling="2025-01-29 10:51:56.866982208 +0000 UTC m=+20.995260185" observedRunningTime="2025-01-29 10:52:02.085617599 +0000 UTC m=+26.213895616" watchObservedRunningTime="2025-01-29 10:52:02.08588817 +0000 UTC m=+26.214166187" Jan 29 10:52:02.184313 sshd[3378]: Connection closed by 10.0.0.1 port 48626 Jan 29 10:52:02.184668 sshd-session[3376]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:02.189052 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:48626.service: Deactivated successfully. Jan 29 10:52:02.190809 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 10:52:02.192316 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Jan 29 10:52:02.193381 systemd-logind[1422]: Removed session 8. Jan 29 10:52:03.060566 kubelet[2520]: E0129 10:52:03.060485 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:03.380171 systemd-networkd[1368]: cilium_host: Link UP Jan 29 10:52:03.380307 systemd-networkd[1368]: cilium_net: Link UP Jan 29 10:52:03.380481 systemd-networkd[1368]: cilium_net: Gained carrier Jan 29 10:52:03.380625 systemd-networkd[1368]: cilium_host: Gained carrier Jan 29 10:52:03.380739 systemd-networkd[1368]: cilium_net: Gained IPv6LL Jan 29 10:52:03.380878 systemd-networkd[1368]: cilium_host: Gained IPv6LL Jan 29 10:52:03.482771 systemd-networkd[1368]: cilium_vxlan: Link UP Jan 29 10:52:03.482777 systemd-networkd[1368]: cilium_vxlan: Gained carrier Jan 29 10:52:03.977193 kernel: NET: Registered PF_ALG protocol family Jan 29 10:52:04.062701 kubelet[2520]: E0129 10:52:04.062657 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:04.611608 systemd-networkd[1368]: lxc_health: Link UP Jan 29 10:52:04.619821 systemd-networkd[1368]: lxc_health: Gained carrier Jan 29 10:52:04.838185 kernel: eth0: renamed from tmp60fad Jan 29 10:52:04.849002 systemd-networkd[1368]: lxc54d76680c86e: Link UP Jan 29 10:52:04.849388 systemd-networkd[1368]: lxcff7c357a87b6: Link UP Jan 29 10:52:04.849543 systemd-networkd[1368]: lxc54d76680c86e: Gained carrier Jan 29 10:52:04.863241 kernel: eth0: renamed from tmp9175f Jan 29 10:52:04.871060 systemd-networkd[1368]: lxcff7c357a87b6: Gained carrier Jan 29 10:52:05.386626 kubelet[2520]: E0129 10:52:05.386579 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:05.434268 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Jan 29 10:52:05.881315 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jan 29 10:52:06.071656 kubelet[2520]: E0129 10:52:06.071604 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:06.329302 systemd-networkd[1368]: lxc54d76680c86e: Gained IPv6LL Jan 29 10:52:06.521299 systemd-networkd[1368]: lxcff7c357a87b6: Gained IPv6LL Jan 29 10:52:07.073757 kubelet[2520]: E0129 10:52:07.073711 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:07.198331 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:59482.service - OpenSSH per-connection server daemon (10.0.0.1:59482). Jan 29 10:52:07.251158 sshd[3775]: Accepted publickey for core from 10.0.0.1 port 59482 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:07.251868 sshd-session[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:07.257051 systemd-logind[1422]: New session 9 of user core. Jan 29 10:52:07.266907 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 10:52:07.408927 sshd[3778]: Connection closed by 10.0.0.1 port 59482 Jan 29 10:52:07.408822 sshd-session[3775]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:07.416261 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:59482.service: Deactivated successfully. Jan 29 10:52:07.418292 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 10:52:07.420846 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. Jan 29 10:52:07.421701 systemd-logind[1422]: Removed session 9. Jan 29 10:52:08.552069 containerd[1442]: time="2025-01-29T10:52:08.551843766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:52:08.552069 containerd[1442]: time="2025-01-29T10:52:08.551900448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:52:08.552069 containerd[1442]: time="2025-01-29T10:52:08.551911568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:52:08.552069 containerd[1442]: time="2025-01-29T10:52:08.551987131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:52:08.552546 containerd[1442]: time="2025-01-29T10:52:08.552333982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:52:08.552546 containerd[1442]: time="2025-01-29T10:52:08.552394624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:52:08.552546 containerd[1442]: time="2025-01-29T10:52:08.552405345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:52:08.552610 containerd[1442]: time="2025-01-29T10:52:08.552484907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:52:08.578352 systemd[1]: Started cri-containerd-60fadfeae1d9df3e557daeca5298d0bd1cd23b9ba711d1fdc46f65dfb91940fa.scope - libcontainer container 60fadfeae1d9df3e557daeca5298d0bd1cd23b9ba711d1fdc46f65dfb91940fa. 
Jan 29 10:52:08.580259 systemd[1]: Started cri-containerd-9175f25f3872d9296e1a263bb4c8f8a40f0e3bc64a8b25e1c5401e044e276a3e.scope - libcontainer container 9175f25f3872d9296e1a263bb4c8f8a40f0e3bc64a8b25e1c5401e044e276a3e. Jan 29 10:52:08.591957 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 10:52:08.595325 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 10:52:08.613254 containerd[1442]: time="2025-01-29T10:52:08.613215691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nzrtf,Uid:52bddd13-a851-4887-91db-87d15edf1373,Namespace:kube-system,Attempt:0,} returns sandbox id \"60fadfeae1d9df3e557daeca5298d0bd1cd23b9ba711d1fdc46f65dfb91940fa\"" Jan 29 10:52:08.614499 kubelet[2520]: E0129 10:52:08.614416 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:08.618877 containerd[1442]: time="2025-01-29T10:52:08.618836202Z" level=info msg="CreateContainer within sandbox \"60fadfeae1d9df3e557daeca5298d0bd1cd23b9ba711d1fdc46f65dfb91940fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:52:08.623426 containerd[1442]: time="2025-01-29T10:52:08.623393956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zvdkt,Uid:25c0c1bf-db15-4e7c-b157-bbed45244c63,Namespace:kube-system,Attempt:0,} returns sandbox id \"9175f25f3872d9296e1a263bb4c8f8a40f0e3bc64a8b25e1c5401e044e276a3e\"" Jan 29 10:52:08.624064 kubelet[2520]: E0129 10:52:08.624043 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:08.625788 containerd[1442]: time="2025-01-29T10:52:08.625758717Z" level=info msg="CreateContainer within sandbox \"9175f25f3872d9296e1a263bb4c8f8a40f0e3bc64a8b25e1c5401e044e276a3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:52:08.634128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193873473.mount: Deactivated successfully. Jan 29 10:52:08.637621 containerd[1442]: time="2025-01-29T10:52:08.637584439Z" level=info msg="CreateContainer within sandbox \"60fadfeae1d9df3e557daeca5298d0bd1cd23b9ba711d1fdc46f65dfb91940fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9217f2daf39e66e9ed3cb7300c87a2d8f26977af9b9639065ea9c535a828c248\"" Jan 29 10:52:08.639132 containerd[1442]: time="2025-01-29T10:52:08.638343424Z" level=info msg="StartContainer for \"9217f2daf39e66e9ed3cb7300c87a2d8f26977af9b9639065ea9c535a828c248\"" Jan 29 10:52:08.641723 containerd[1442]: time="2025-01-29T10:52:08.641681618Z" level=info msg="CreateContainer within sandbox \"9175f25f3872d9296e1a263bb4c8f8a40f0e3bc64a8b25e1c5401e044e276a3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bebc407bbba08531bf1ab40f8cbfd3070f1c9d2d0a1d373ecc63c31196b1c552\"" Jan 29 10:52:08.642275 containerd[1442]: time="2025-01-29T10:52:08.642257237Z" level=info msg="StartContainer for \"bebc407bbba08531bf1ab40f8cbfd3070f1c9d2d0a1d373ecc63c31196b1c552\"" Jan 29 10:52:08.665327 systemd[1]: Started cri-containerd-9217f2daf39e66e9ed3cb7300c87a2d8f26977af9b9639065ea9c535a828c248.scope - libcontainer container 9217f2daf39e66e9ed3cb7300c87a2d8f26977af9b9639065ea9c535a828c248. 
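Note: the kernel's "eth0: renamed from tmp60fad" and "tmp9175f" lines earlier pair up with the two pod sandboxes above: the temporary veth name is "tmp" plus a prefix of the 64-hex sandbox id. A sketch of that correlation, assuming the prefix convention holds as it does here:

```go
package main

import (
	"fmt"
	"strings"
)

// matchSandbox pairs a kernel rename source such as "tmp60fad" with the pod
// sandbox whose id starts with the same hex prefix, as seen in this journal.
func matchSandbox(tmpName string, sandboxes []string) string {
	prefix := strings.TrimPrefix(tmpName, "tmp")
	for _, id := range sandboxes {
		if strings.HasPrefix(id, prefix) {
			return id
		}
	}
	return "(no match)"
}

func main() {
	sandboxes := []string{
		"60fadfeae1d9df3e557daeca5298d0bd1cd23b9ba711d1fdc46f65dfb91940fa", // coredns-6f6b679f8f-nzrtf
		"9175f25f3872d9296e1a263bb4c8f8a40f0e3bc64a8b25e1c5401e044e276a3e", // coredns-6f6b679f8f-zvdkt
	}
	for _, tmp := range []string{"tmp60fad", "tmp9175f"} {
		fmt.Printf("%s -> %.12s\n", tmp, matchSandbox(tmp, sandboxes))
	}
}
```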
Jan 29 10:52:08.668999 systemd[1]: Started cri-containerd-bebc407bbba08531bf1ab40f8cbfd3070f1c9d2d0a1d373ecc63c31196b1c552.scope - libcontainer container bebc407bbba08531bf1ab40f8cbfd3070f1c9d2d0a1d373ecc63c31196b1c552. Jan 29 10:52:08.718949 containerd[1442]: time="2025-01-29T10:52:08.718846399Z" level=info msg="StartContainer for \"bebc407bbba08531bf1ab40f8cbfd3070f1c9d2d0a1d373ecc63c31196b1c552\" returns successfully" Jan 29 10:52:08.718949 containerd[1442]: time="2025-01-29T10:52:08.718935362Z" level=info msg="StartContainer for \"9217f2daf39e66e9ed3cb7300c87a2d8f26977af9b9639065ea9c535a828c248\" returns successfully" Jan 29 10:52:09.077832 kubelet[2520]: E0129 10:52:09.077789 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:09.084166 kubelet[2520]: E0129 10:52:09.083900 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:09.091991 kubelet[2520]: I0129 10:52:09.091930 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zvdkt" podStartSLOduration=28.091911546 podStartE2EDuration="28.091911546s" podCreationTimestamp="2025-01-29 10:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:52:09.091634696 +0000 UTC m=+33.219912713" watchObservedRunningTime="2025-01-29 10:52:09.091911546 +0000 UTC m=+33.220189523" Jan 29 10:52:09.118762 kubelet[2520]: I0129 10:52:09.118693 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nzrtf" podStartSLOduration=28.118674589 podStartE2EDuration="28.118674589s" podCreationTimestamp="2025-01-29 10:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:52:09.105863006 +0000 UTC m=+33.234141023" watchObservedRunningTime="2025-01-29 10:52:09.118674589 +0000 UTC m=+33.246952566" Jan 29 10:52:10.085539 kubelet[2520]: E0129 10:52:10.085501 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:10.085539 kubelet[2520]: E0129 10:52:10.085537 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:11.086761 kubelet[2520]: E0129 10:52:11.086719 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:12.419837 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:59496.service - OpenSSH per-connection server daemon (10.0.0.1:59496). Jan 29 10:52:12.466214 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 59496 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:12.467097 sshd-session[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:12.471217 systemd-logind[1422]: New session 10 of user core. Jan 29 10:52:12.480309 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 10:52:12.595633 sshd[3969]: Connection closed by 10.0.0.1 port 59496 Jan 29 10:52:12.596003 sshd-session[3967]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:12.599351 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. Jan 29 10:52:12.599571 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:59496.service: Deactivated successfully. Jan 29 10:52:12.601330 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 10:52:12.602785 systemd-logind[1422]: Removed session 10. Jan 29 10:52:17.608928 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:53682.service - OpenSSH per-connection server daemon (10.0.0.1:53682). Jan 29 10:52:17.654205 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 53682 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:17.654787 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:17.658824 systemd-logind[1422]: New session 11 of user core. Jan 29 10:52:17.670386 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 10:52:17.794866 sshd[3985]: Connection closed by 10.0.0.1 port 53682 Jan 29 10:52:17.794445 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:17.807597 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:53682.service: Deactivated successfully. Jan 29 10:52:17.809659 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 10:52:17.811011 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Jan 29 10:52:17.821638 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:53696.service - OpenSSH per-connection server daemon (10.0.0.1:53696). Jan 29 10:52:17.822824 systemd-logind[1422]: Removed session 11. Jan 29 10:52:17.858212 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 53696 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:17.858836 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:17.862684 systemd-logind[1422]: New session 12 of user core. Jan 29 10:52:17.870376 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 10:52:18.022368 sshd[4001]: Connection closed by 10.0.0.1 port 53696 Jan 29 10:52:18.022966 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:18.030553 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:53696.service: Deactivated successfully. Jan 29 10:52:18.034700 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 10:52:18.042335 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Jan 29 10:52:18.049525 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:53704.service - OpenSSH per-connection server daemon (10.0.0.1:53704). Jan 29 10:52:18.050595 systemd-logind[1422]: Removed session 12. Jan 29 10:52:18.102296 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 53704 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:18.103631 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:18.108421 systemd-logind[1422]: New session 13 of user core. Jan 29 10:52:18.113353 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 29 10:52:18.227662 sshd[4013]: Connection closed by 10.0.0.1 port 53704 Jan 29 10:52:18.228006 sshd-session[4011]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:18.231879 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:53704.service: Deactivated successfully. Jan 29 10:52:18.235683 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 10:52:18.236397 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Jan 29 10:52:18.237232 systemd-logind[1422]: Removed session 13. Jan 29 10:52:23.245480 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:53656.service - OpenSSH per-connection server daemon (10.0.0.1:53656). Jan 29 10:52:23.298666 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 53656 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:23.299858 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:23.308040 systemd-logind[1422]: New session 14 of user core. Jan 29 10:52:23.317412 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 10:52:23.460599 sshd[4027]: Connection closed by 10.0.0.1 port 53656 Jan 29 10:52:23.461130 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:23.464491 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:53656.service: Deactivated successfully. Jan 29 10:52:23.468415 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 10:52:23.469128 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Jan 29 10:52:23.470138 systemd-logind[1422]: Removed session 14. Jan 29 10:52:28.474188 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668). Jan 29 10:52:28.521276 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:28.522579 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:28.528530 systemd-logind[1422]: New session 15 of user core. Jan 29 10:52:28.538307 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 10:52:28.661953 sshd[4042]: Connection closed by 10.0.0.1 port 53668 Jan 29 10:52:28.662635 sshd-session[4040]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:28.672431 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:53668.service: Deactivated successfully. Jan 29 10:52:28.675241 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 10:52:28.676469 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Jan 29 10:52:28.685445 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:53680.service - OpenSSH per-connection server daemon (10.0.0.1:53680). Jan 29 10:52:28.686504 systemd-logind[1422]: Removed session 15. Jan 29 10:52:28.724558 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 53680 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:28.725682 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:28.730270 systemd-logind[1422]: New session 16 of user core. Jan 29 10:52:28.742364 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 10:52:29.004452 sshd[4057]: Connection closed by 10.0.0.1 port 53680 Jan 29 10:52:29.005619 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:29.012277 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:53680.service: Deactivated successfully. Jan 29 10:52:29.013981 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 10:52:29.016353 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Jan 29 10:52:29.032963 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:53698.service - OpenSSH per-connection server daemon (10.0.0.1:53698). Jan 29 10:52:29.038200 systemd-logind[1422]: Removed session 16. Jan 29 10:52:29.079838 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 53698 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:29.081495 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:29.086499 systemd-logind[1422]: New session 17 of user core. Jan 29 10:52:29.095343 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 10:52:30.437900 sshd[4069]: Connection closed by 10.0.0.1 port 53698 Jan 29 10:52:30.438341 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:30.445633 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:53698.service: Deactivated successfully. Jan 29 10:52:30.450015 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 10:52:30.451596 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. Jan 29 10:52:30.462316 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:53700.service - OpenSSH per-connection server daemon (10.0.0.1:53700). Jan 29 10:52:30.466604 systemd-logind[1422]: Removed session 17. Jan 29 10:52:30.503784 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 53700 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:30.505382 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:30.509411 systemd-logind[1422]: New session 18 of user core. Jan 29 10:52:30.529384 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 10:52:30.775388 sshd[4089]: Connection closed by 10.0.0.1 port 53700 Jan 29 10:52:30.777772 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:30.784204 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:53700.service: Deactivated successfully. Jan 29 10:52:30.786613 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 10:52:30.787553 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Jan 29 10:52:30.798492 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:53706.service - OpenSSH per-connection server daemon (10.0.0.1:53706). Jan 29 10:52:30.799617 systemd-logind[1422]: Removed session 18. Jan 29 10:52:30.837454 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 53706 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:30.839116 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:30.843137 systemd-logind[1422]: New session 19 of user core. Jan 29 10:52:30.855341 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 29 10:52:30.962350 sshd[4102]: Connection closed by 10.0.0.1 port 53706 Jan 29 10:52:30.962697 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:30.965715 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:53706.service: Deactivated successfully. Jan 29 10:52:30.967406 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 10:52:30.967966 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. Jan 29 10:52:30.968717 systemd-logind[1422]: Removed session 19. Jan 29 10:52:35.975650 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:52652.service - OpenSSH per-connection server daemon (10.0.0.1:52652). Jan 29 10:52:36.015673 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 52652 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:36.017035 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:36.021101 systemd-logind[1422]: New session 20 of user core. Jan 29 10:52:36.029300 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 10:52:36.149120 sshd[4122]: Connection closed by 10.0.0.1 port 52652 Jan 29 10:52:36.149659 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:36.152995 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:52652.service: Deactivated successfully. Jan 29 10:52:36.154698 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 10:52:36.155285 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Jan 29 10:52:36.156041 systemd-logind[1422]: Removed session 20. Jan 29 10:52:41.159832 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:52662.service - OpenSSH per-connection server daemon (10.0.0.1:52662). Jan 29 10:52:41.201032 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 52662 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:41.201534 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:41.205137 systemd-logind[1422]: New session 21 of user core. Jan 29 10:52:41.211336 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 10:52:41.321823 sshd[4137]: Connection closed by 10.0.0.1 port 52662 Jan 29 10:52:41.322441 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:41.326459 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:52662.service: Deactivated successfully. Jan 29 10:52:41.328471 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 10:52:41.330422 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. Jan 29 10:52:41.331220 systemd-logind[1422]: Removed session 21. Jan 29 10:52:46.332807 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:46196.service - OpenSSH per-connection server daemon (10.0.0.1:46196). Jan 29 10:52:46.371177 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 46196 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:46.372081 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:46.376536 systemd-logind[1422]: New session 22 of user core. Jan 29 10:52:46.388357 systemd[1]: Started session-22.scope - Session 22 of User core. 
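Note: sessions 8 through 22 above all follow the same short-lived pattern from 10.0.0.1 (Accepted publickey, pam_unix session open, Connection closed, scope deactivated), consistent with periodic automation rather than interactive use. A sketch that pairs accepts with closes by client port and reports session length, assuming one journal entry per line in the format shown:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	acceptRE = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Accepted publickey for \S+ from \S+ port (\d+)`)
	closeRE  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Connection closed by \S+ port (\d+)`)
)

// Journal timestamp prefix as it appears above; the year defaults to 0 when
// parsed, which is fine because only differences between stamps are used.
const stampLayout = "Jan 2 15:04:05.000000"

func main() {
	opened := map[string]time.Time{} // client port -> accept time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := acceptRE.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := closeRE.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("port %s: session lasted %s\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```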
Jan 29 10:52:46.506300 sshd[4156]: Connection closed by 10.0.0.1 port 46196 Jan 29 10:52:46.506627 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:46.509450 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:46196.service: Deactivated successfully. Jan 29 10:52:46.511235 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 10:52:46.511879 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. Jan 29 10:52:46.512734 systemd-logind[1422]: Removed session 22. Jan 29 10:52:48.948198 kubelet[2520]: E0129 10:52:48.947344 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:52:51.517539 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:46210.service - OpenSSH per-connection server daemon (10.0.0.1:46210). Jan 29 10:52:51.576491 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 46210 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:51.577659 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:51.581415 systemd-logind[1422]: New session 23 of user core. Jan 29 10:52:51.594268 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 10:52:51.712376 sshd[4170]: Connection closed by 10.0.0.1 port 46210 Jan 29 10:52:51.712960 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 10:52:51.723422 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:46210.service: Deactivated successfully. Jan 29 10:52:51.726608 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 10:52:51.728401 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit. Jan 29 10:52:51.740401 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:46226.service - OpenSSH per-connection server daemon (10.0.0.1:46226). Jan 29 10:52:51.741483 systemd-logind[1422]: Removed session 23. Jan 29 10:52:51.775733 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 46226 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:52:51.777037 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:51.780840 systemd-logind[1422]: New session 24 of user core. Jan 29 10:52:51.792366 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 10:52:53.729450 containerd[1442]: time="2025-01-29T10:52:53.729302446Z" level=info msg="StopContainer for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" with timeout 30 (s)" Jan 29 10:52:53.730758 containerd[1442]: time="2025-01-29T10:52:53.730316528Z" level=info msg="Stop container \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" with signal terminated" Jan 29 10:52:53.742794 systemd[1]: cri-containerd-63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88.scope: Deactivated successfully. Jan 29 10:52:53.766550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88-rootfs.mount: Deactivated successfully. 
Jan 29 10:52:53.778018 containerd[1442]: time="2025-01-29T10:52:53.777982765Z" level=info msg="StopContainer for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" with timeout 2 (s)" Jan 29 10:52:53.778459 containerd[1442]: time="2025-01-29T10:52:53.778323232Z" level=info msg="shim disconnected" id=63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88 namespace=k8s.io Jan 29 10:52:53.778459 containerd[1442]: time="2025-01-29T10:52:53.778380630Z" level=warning msg="cleaning up after shim disconnected" id=63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88 namespace=k8s.io Jan 29 10:52:53.778459 containerd[1442]: time="2025-01-29T10:52:53.778390749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:52:53.779055 containerd[1442]: time="2025-01-29T10:52:53.778791454Z" level=info msg="Stop container \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" with signal terminated" Jan 29 10:52:53.781606 containerd[1442]: time="2025-01-29T10:52:53.781553150Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:52:53.786090 systemd-networkd[1368]: lxc_health: Link DOWN Jan 29 10:52:53.786097 systemd-networkd[1368]: lxc_health: Lost carrier Jan 29 10:52:53.809987 systemd[1]: cri-containerd-440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105.scope: Deactivated successfully. Jan 29 10:52:53.810664 systemd[1]: cri-containerd-440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105.scope: Consumed 6.949s CPU time. Jan 29 10:52:53.833573 containerd[1442]: time="2025-01-29T10:52:53.833421388Z" level=info msg="StopContainer for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" returns successfully" Jan 29 10:52:53.837420 containerd[1442]: time="2025-01-29T10:52:53.837243963Z" level=info msg="StopPodSandbox for \"545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7\"" Jan 29 10:52:53.837420 containerd[1442]: time="2025-01-29T10:52:53.837318000Z" level=info msg="Container to stop \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:52:53.839409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7-shm.mount: Deactivated successfully. Jan 29 10:52:53.844633 systemd[1]: cri-containerd-545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7.scope: Deactivated successfully. Jan 29 10:52:53.849836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105-rootfs.mount: Deactivated successfully. 
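Note: the scope teardown above reports cilium-agent consumed 6.949s of CPU time; against the roughly 52.7s the container ran (StartContainer returned at 10:52:01.126, scope deactivated at 10:52:53.810), that averages to about 13% of one core. The arithmetic, with both figures read from this journal:

```go
package main

import "fmt"

func main() {
	const (
		cpuSeconds  = 6.949 // "Consumed 6.949s CPU time" from the scope teardown above
		wallSeconds = 52.68 // ~10:52:01.13 (StartContainer) to ~10:52:53.81 (scope stop)
	)
	fmt.Printf("average utilisation: %.1f%% of one core\n", cpuSeconds/wallSeconds*100) // ~13.2%
}
```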
Jan 29 10:52:53.867163 containerd[1442]: time="2025-01-29T10:52:53.867083475Z" level=info msg="shim disconnected" id=440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105 namespace=k8s.io Jan 29 10:52:53.867163 containerd[1442]: time="2025-01-29T10:52:53.867162912Z" level=warning msg="cleaning up after shim disconnected" id=440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105 namespace=k8s.io Jan 29 10:52:53.867613 containerd[1442]: time="2025-01-29T10:52:53.867173871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:52:53.872102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7-rootfs.mount: Deactivated successfully. Jan 29 10:52:53.877527 containerd[1442]: time="2025-01-29T10:52:53.877461042Z" level=info msg="shim disconnected" id=545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7 namespace=k8s.io Jan 29 10:52:53.877854 containerd[1442]: time="2025-01-29T10:52:53.877617956Z" level=warning msg="cleaning up after shim disconnected" id=545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7 namespace=k8s.io Jan 29 10:52:53.877854 containerd[1442]: time="2025-01-29T10:52:53.877631956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:52:53.890355 containerd[1442]: time="2025-01-29T10:52:53.889462508Z" level=info msg="StopContainer for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" returns successfully" Jan 29 10:52:53.892617 containerd[1442]: time="2025-01-29T10:52:53.892560151Z" level=info msg="StopPodSandbox for \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\"" Jan 29 10:52:53.892694 containerd[1442]: time="2025-01-29T10:52:53.892631508Z" level=info msg="Container to stop \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:52:53.892694 containerd[1442]: time="2025-01-29T10:52:53.892645268Z" level=info msg="Container to stop \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:52:53.892694 containerd[1442]: time="2025-01-29T10:52:53.892654667Z" level=info msg="Container to stop \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:52:53.892694 containerd[1442]: time="2025-01-29T10:52:53.892663627Z" level=info msg="Container to stop \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:52:53.892694 containerd[1442]: time="2025-01-29T10:52:53.892671907Z" level=info msg="Container to stop \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:52:53.896738 containerd[1442]: time="2025-01-29T10:52:53.896618677Z" level=info msg="TearDown network for sandbox \"545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7\" successfully" Jan 29 10:52:53.896738 containerd[1442]: time="2025-01-29T10:52:53.896644276Z" level=info msg="StopPodSandbox for \"545f47c857d4a9133992d1a304f95e16587069727bc46d83ef6e4a3c5daa91a7\" returns successfully" Jan 29 10:52:53.900088 systemd[1]: cri-containerd-b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced.scope: Deactivated successfully. 
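Note: the "Container to stop" entries above enumerate every container that ever ran in the cilium pod's sandbox, all already in CONTAINER_EXITED state by the time the sandbox is stopped. Cross-referencing the ids against the CreateContainer entries earlier in this journal gives the full lineup:

```go
package main

import "fmt"

func main() {
	// Container ids from the "Container to stop" entries above, mapped back to
	// the names they were created with earlier in this journal.
	containers := []struct{ id, name string }{
		{"f1fde6cb3221", "mount-cgroup"},
		{"20d73bbcec52", "apply-sysctl-overwrites"},
		{"adc085ba49ec", "mount-bpf-fs"},
		{"6964154ef724", "clean-cilium-state"},
		{"440700df8cb3", "cilium-agent"},
	}
	for _, c := range containers {
		fmt.Printf("%s  %s\n", c.id, c.name)
	}
}
```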
Jan 29 10:52:53.927869 containerd[1442]: time="2025-01-29T10:52:53.927767779Z" level=info msg="shim disconnected" id=b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced namespace=k8s.io
Jan 29 10:52:53.928799 containerd[1442]: time="2025-01-29T10:52:53.928384916Z" level=warning msg="cleaning up after shim disconnected" id=b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced namespace=k8s.io
Jan 29 10:52:53.928799 containerd[1442]: time="2025-01-29T10:52:53.928410955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:52:53.939197 containerd[1442]: time="2025-01-29T10:52:53.939120750Z" level=info msg="TearDown network for sandbox \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" successfully"
Jan 29 10:52:53.939197 containerd[1442]: time="2025-01-29T10:52:53.939160428Z" level=info msg="StopPodSandbox for \"b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced\" returns successfully"
Jan 29 10:52:53.947100 kubelet[2520]: E0129 10:52:53.947067 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:52:53.965292 kubelet[2520]: I0129 10:52:53.965246 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-557t5\" (UniqueName: \"kubernetes.io/projected/278b186c-d3a9-4123-9357-a5929a01d1d3-kube-api-access-557t5\") pod \"278b186c-d3a9-4123-9357-a5929a01d1d3\" (UID: \"278b186c-d3a9-4123-9357-a5929a01d1d3\") "
Jan 29 10:52:53.965409 kubelet[2520]: I0129 10:52:53.965366 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/278b186c-d3a9-4123-9357-a5929a01d1d3-cilium-config-path\") pod \"278b186c-d3a9-4123-9357-a5929a01d1d3\" (UID: \"278b186c-d3a9-4123-9357-a5929a01d1d3\") "
Jan 29 10:52:53.970673 kubelet[2520]: I0129 10:52:53.970621 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278b186c-d3a9-4123-9357-a5929a01d1d3-kube-api-access-557t5" (OuterVolumeSpecName: "kube-api-access-557t5") pod "278b186c-d3a9-4123-9357-a5929a01d1d3" (UID: "278b186c-d3a9-4123-9357-a5929a01d1d3"). InnerVolumeSpecName "kube-api-access-557t5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 10:52:53.971277 kubelet[2520]: I0129 10:52:53.971240 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278b186c-d3a9-4123-9357-a5929a01d1d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "278b186c-d3a9-4123-9357-a5929a01d1d3" (UID: "278b186c-d3a9-4123-9357-a5929a01d1d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 10:52:54.065881 kubelet[2520]: I0129 10:52:54.065735 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-cgroup\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.065881 kubelet[2520]: I0129 10:52:54.065773 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-kernel\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.065881 kubelet[2520]: I0129 10:52:54.065869 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-run\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066052 kubelet[2520]: I0129 10:52:54.065892 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-clustermesh-secrets\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066052 kubelet[2520]: I0129 10:52:54.065909 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hubble-tls\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066052 kubelet[2520]: I0129 10:52:54.065924 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-lib-modules\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066052 kubelet[2520]: I0129 10:52:54.065943 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-config-path\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066052 kubelet[2520]: I0129 10:52:54.065957 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-bpf-maps\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066052 kubelet[2520]: I0129 10:52:54.065971 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cni-path\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066213 kubelet[2520]: I0129 10:52:54.065985 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-xtables-lock\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066213 kubelet[2520]: I0129 10:52:54.065999 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-net\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066213 kubelet[2520]: I0129 10:52:54.066016 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hostproc\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066213 kubelet[2520]: I0129 10:52:54.066033 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmqdb\" (UniqueName: \"kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-kube-api-access-xmqdb\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066213 kubelet[2520]: I0129 10:52:54.066052 2520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-etc-cni-netd\") pod \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\" (UID: \"97cbe104-0c6d-40f4-bd6a-64d6cd581d22\") "
Jan 29 10:52:54.066213 kubelet[2520]: I0129 10:52:54.066083 2520 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/278b186c-d3a9-4123-9357-a5929a01d1d3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.066344 kubelet[2520]: I0129 10:52:54.066094 2520 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-557t5\" (UniqueName: \"kubernetes.io/projected/278b186c-d3a9-4123-9357-a5929a01d1d3-kube-api-access-557t5\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.066344 kubelet[2520]: I0129 10:52:54.066170 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066344 kubelet[2520]: I0129 10:52:54.066204 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066344 kubelet[2520]: I0129 10:52:54.066222 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066344 kubelet[2520]: I0129 10:52:54.066239 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cni-path" (OuterVolumeSpecName: "cni-path") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066452 kubelet[2520]: I0129 10:52:54.066255 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066452 kubelet[2520]: I0129 10:52:54.066250 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066452 kubelet[2520]: I0129 10:52:54.066269 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066452 kubelet[2520]: I0129 10:52:54.066284 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hostproc" (OuterVolumeSpecName: "hostproc") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066452 kubelet[2520]: I0129 10:52:54.066283 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.066937 kubelet[2520]: I0129 10:52:54.066606 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:52:54.068812 kubelet[2520]: I0129 10:52:54.068670 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 10:52:54.068812 kubelet[2520]: I0129 10:52:54.068765 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-kube-api-access-xmqdb" (OuterVolumeSpecName: "kube-api-access-xmqdb") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "kube-api-access-xmqdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 10:52:54.069558 kubelet[2520]: I0129 10:52:54.069522 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 10:52:54.069943 kubelet[2520]: I0129 10:52:54.069914 2520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "97cbe104-0c6d-40f4-bd6a-64d6cd581d22" (UID: "97cbe104-0c6d-40f4-bd6a-64d6cd581d22"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 10:52:54.166552 kubelet[2520]: I0129 10:52:54.166517 2520 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166698 2520 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xmqdb\" (UniqueName: \"kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-kube-api-access-xmqdb\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166720 2520 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166729 2520 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166736 2520 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166747 2520 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166755 2520 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166762 2520 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166823 kubelet[2520]: I0129 10:52:54.166771 2520 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166996 kubelet[2520]: I0129 10:52:54.166779 2520 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166996 kubelet[2520]: I0129 10:52:54.166786 2520 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166996 kubelet[2520]: I0129 10:52:54.166794 2520 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166996 kubelet[2520]: I0129 10:52:54.166801 2520 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.166996 kubelet[2520]: I0129 10:52:54.166808 2520 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97cbe104-0c6d-40f4-bd6a-64d6cd581d22-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 29 10:52:54.185177 kubelet[2520]: I0129 10:52:54.184378 2520 scope.go:117] "RemoveContainer" containerID="63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88"
Jan 29 10:52:54.185901 containerd[1442]: time="2025-01-29T10:52:54.185622898Z" level=info msg="RemoveContainer for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\""
Jan 29 10:52:54.188092 systemd[1]: Removed slice kubepods-besteffort-pod278b186c_d3a9_4123_9357_a5929a01d1d3.slice - libcontainer container kubepods-besteffort-pod278b186c_d3a9_4123_9357_a5929a01d1d3.slice.
Jan 29 10:52:54.189809 containerd[1442]: time="2025-01-29T10:52:54.189755788Z" level=info msg="RemoveContainer for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" returns successfully"
Jan 29 10:52:54.190561 kubelet[2520]: I0129 10:52:54.190543 2520 scope.go:117] "RemoveContainer" containerID="63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88"
Jan 29 10:52:54.190892 containerd[1442]: time="2025-01-29T10:52:54.190854828Z" level=error msg="ContainerStatus for \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\": not found"
Jan 29 10:52:54.191511 systemd[1]: Removed slice kubepods-burstable-pod97cbe104_0c6d_40f4_bd6a_64d6cd581d22.slice - libcontainer container kubepods-burstable-pod97cbe104_0c6d_40f4_bd6a_64d6cd581d22.slice.
Jan 29 10:52:54.191610 systemd[1]: kubepods-burstable-pod97cbe104_0c6d_40f4_bd6a_64d6cd581d22.slice: Consumed 7.113s CPU time.
Jan 29 10:52:54.219931 kubelet[2520]: E0129 10:52:54.219885 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\": not found" containerID="63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88"
Jan 29 10:52:54.220037 kubelet[2520]: I0129 10:52:54.219927 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88"} err="failed to get container status \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\": rpc error: code = NotFound desc = an error occurred when try to find container \"63d48a5de5232a043dfc9a7acbef70773ea05a4212706f8f65ea2cfeaa3ddc88\": not found"
Jan 29 10:52:54.220037 kubelet[2520]: I0129 10:52:54.220014 2520 scope.go:117] "RemoveContainer" containerID="440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105"
Jan 29 10:52:54.221343 containerd[1442]: time="2025-01-29T10:52:54.221044530Z" level=info msg="RemoveContainer for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\""
Jan 29 10:52:54.223792 containerd[1442]: time="2025-01-29T10:52:54.223758951Z" level=info msg="RemoveContainer for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" returns successfully"
Jan 29 10:52:54.224020 kubelet[2520]: I0129 10:52:54.223995 2520 scope.go:117] "RemoveContainer" containerID="6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e"
Jan 29 10:52:54.224850 containerd[1442]: time="2025-01-29T10:52:54.224814353Z" level=info msg="RemoveContainer for \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\""
Jan 29 10:52:54.226981 containerd[1442]: time="2025-01-29T10:52:54.226943796Z" level=info msg="RemoveContainer for \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\" returns successfully"
Jan 29 10:52:54.227114 kubelet[2520]: I0129 10:52:54.227082 2520 scope.go:117] "RemoveContainer" containerID="adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf"
Jan 29 10:52:54.227934 containerd[1442]: time="2025-01-29T10:52:54.227909640Z" level=info msg="RemoveContainer for \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\""
Jan 29 10:52:54.230153 containerd[1442]: time="2025-01-29T10:52:54.230096401Z" level=info msg="RemoveContainer for \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\" returns successfully"
Jan 29 10:52:54.230325 kubelet[2520]: I0129 10:52:54.230289 2520 scope.go:117] "RemoveContainer" containerID="20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd"
Jan 29 10:52:54.231287 containerd[1442]: time="2025-01-29T10:52:54.231249039Z" level=info msg="RemoveContainer for \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\""
Jan 29 10:52:54.233307 containerd[1442]: time="2025-01-29T10:52:54.233270446Z" level=info msg="RemoveContainer for \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\" returns successfully"
Jan 29 10:52:54.233470 kubelet[2520]: I0129 10:52:54.233440 2520 scope.go:117] "RemoveContainer" containerID="f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d"
Jan 29 10:52:54.234569 containerd[1442]: time="2025-01-29T10:52:54.234361246Z" level=info msg="RemoveContainer for \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\""
Jan 29 10:52:54.236436 containerd[1442]: time="2025-01-29T10:52:54.236406052Z" level=info msg="RemoveContainer for \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\" returns successfully"
Jan 29 10:52:54.236680 kubelet[2520]: I0129 10:52:54.236654 2520 scope.go:117] "RemoveContainer" containerID="440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105"
Jan 29 10:52:54.236927 containerd[1442]: time="2025-01-29T10:52:54.236879514Z" level=error msg="ContainerStatus for \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\": not found"
Jan 29 10:52:54.237025 kubelet[2520]: E0129 10:52:54.237006 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\": not found" containerID="440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105"
Jan 29 10:52:54.237070 kubelet[2520]: I0129 10:52:54.237031 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105"} err="failed to get container status \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\": rpc error: code = NotFound desc = an error occurred when try to find container \"440700df8cb367118e380fa1223accb1dcb2aac221496f0ff7cb2d9e89a44105\": not found"
Jan 29 10:52:54.237097 kubelet[2520]: I0129 10:52:54.237070 2520 scope.go:117] "RemoveContainer" containerID="6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e"
Jan 29 10:52:54.237256 containerd[1442]: time="2025-01-29T10:52:54.237229782Z" level=error msg="ContainerStatus for \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\": not found"
Jan 29 10:52:54.237344 kubelet[2520]: E0129 10:52:54.237322 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\": not found" containerID="6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e"
Jan 29 10:52:54.237385 kubelet[2520]: I0129 10:52:54.237348 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e"} err="failed to get container status \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"6964154ef724232dc407d0db474210064aa4125ae02448786b29eb48846abc0e\": not found"
Jan 29 10:52:54.237385 kubelet[2520]: I0129 10:52:54.237361 2520 scope.go:117] "RemoveContainer" containerID="adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf"
Jan 29 10:52:54.237586 containerd[1442]: time="2025-01-29T10:52:54.237537290Z" level=error msg="ContainerStatus for \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\": not found"
Jan 29 10:52:54.237689 kubelet[2520]: E0129 10:52:54.237669 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\": not found" containerID="adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf"
Jan 29 10:52:54.237772 kubelet[2520]: I0129 10:52:54.237695 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf"} err="failed to get container status \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"adc085ba49ec092a9e93b22557a44b10fe5d7971abd1ae37f39e843e559126bf\": not found"
Jan 29 10:52:54.237772 kubelet[2520]: I0129 10:52:54.237713 2520 scope.go:117] "RemoveContainer" containerID="20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd"
Jan 29 10:52:54.237916 containerd[1442]: time="2025-01-29T10:52:54.237888158Z" level=error msg="ContainerStatus for \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\": not found"
Jan 29 10:52:54.237995 kubelet[2520]: E0129 10:52:54.237979 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\": not found" containerID="20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd"
Jan 29 10:52:54.238027 kubelet[2520]: I0129 10:52:54.237997 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd"} err="failed to get container status \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"20d73bbcec520e34890249eefb6802d68c6fd2789d42f9a20c0c203c2e2327bd\": not found"
Jan 29 10:52:54.238027 kubelet[2520]: I0129 10:52:54.238009 2520 scope.go:117] "RemoveContainer" containerID="f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d"
Jan 29 10:52:54.238263 containerd[1442]: time="2025-01-29T10:52:54.238199786Z" level=error msg="ContainerStatus for \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\": not found"
Jan 29 10:52:54.238307 kubelet[2520]: E0129 10:52:54.238282 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\": not found" containerID="f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d"
Jan 29 10:52:54.238307 kubelet[2520]: I0129 10:52:54.238296 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d"} err="failed to get container status \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1fde6cb322145d3e63dd349c6508291802a477f70992f0397b89835a956ab1d\": not found"
Jan 29 10:52:54.748031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced-rootfs.mount: Deactivated successfully.
Jan 29 10:52:54.748138 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4dcd5116ed54c3d25d560d6395599f6392a3bdf828d1d62e4a2bb59ca51fced-shm.mount: Deactivated successfully.
Jan 29 10:52:54.748214 systemd[1]: var-lib-kubelet-pods-278b186c\x2dd3a9\x2d4123\x2d9357\x2da5929a01d1d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d557t5.mount: Deactivated successfully.
Jan 29 10:52:54.748271 systemd[1]: var-lib-kubelet-pods-97cbe104\x2d0c6d\x2d40f4\x2dbd6a\x2d64d6cd581d22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmqdb.mount: Deactivated successfully.
Jan 29 10:52:54.748330 systemd[1]: var-lib-kubelet-pods-97cbe104\x2d0c6d\x2d40f4\x2dbd6a\x2d64d6cd581d22-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 10:52:54.748389 systemd[1]: var-lib-kubelet-pods-97cbe104\x2d0c6d\x2d40f4\x2dbd6a\x2d64d6cd581d22-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 10:52:55.681934 sshd[4185]: Connection closed by 10.0.0.1 port 46226
Jan 29 10:52:55.681839 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:55.689497 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:46226.service: Deactivated successfully.
Jan 29 10:52:55.691404 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 10:52:55.693220 systemd[1]: session-24.scope: Consumed 1.255s CPU time.
Jan 29 10:52:55.694572 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit.
Jan 29 10:52:55.704459 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:42986.service - OpenSSH per-connection server daemon (10.0.0.1:42986).
Jan 29 10:52:55.705428 systemd-logind[1422]: Removed session 24.
Jan 29 10:52:55.740472 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 42986 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:52:55.741830 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:55.745742 systemd-logind[1422]: New session 25 of user core.
Jan 29 10:52:55.753295 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 10:52:55.949500 kubelet[2520]: I0129 10:52:55.949391 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278b186c-d3a9-4123-9357-a5929a01d1d3" path="/var/lib/kubelet/pods/278b186c-d3a9-4123-9357-a5929a01d1d3/volumes"
Jan 29 10:52:55.949795 kubelet[2520]: I0129 10:52:55.949780 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" path="/var/lib/kubelet/pods/97cbe104-0c6d-40f4-bd6a-64d6cd581d22/volumes"
Jan 29 10:52:56.018407 kubelet[2520]: E0129 10:52:56.018351 2520 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 10:52:56.508822 sshd[4348]: Connection closed by 10.0.0.1 port 42986
Jan 29 10:52:56.509331 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:56.523947 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:42986.service: Deactivated successfully.
Jan 29 10:52:56.526584 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 10:52:56.529734 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit.
Jan 29 10:52:56.535053 kubelet[2520]: E0129 10:52:56.534996 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" containerName="mount-bpf-fs"
Jan 29 10:52:56.535053 kubelet[2520]: E0129 10:52:56.535029 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" containerName="cilium-agent"
Jan 29 10:52:56.535053 kubelet[2520]: E0129 10:52:56.535037 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" containerName="mount-cgroup"
Jan 29 10:52:56.535053 kubelet[2520]: E0129 10:52:56.535043 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="278b186c-d3a9-4123-9357-a5929a01d1d3" containerName="cilium-operator"
Jan 29 10:52:56.535053 kubelet[2520]: E0129 10:52:56.535049 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" containerName="apply-sysctl-overwrites"
Jan 29 10:52:56.535053 kubelet[2520]: E0129 10:52:56.535055 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" containerName="clean-cilium-state"
Jan 29 10:52:56.535315 kubelet[2520]: I0129 10:52:56.535077 2520 memory_manager.go:354] "RemoveStaleState removing state" podUID="97cbe104-0c6d-40f4-bd6a-64d6cd581d22" containerName="cilium-agent"
Jan 29 10:52:56.535315 kubelet[2520]: I0129 10:52:56.535084 2520 memory_manager.go:354] "RemoveStaleState removing state" podUID="278b186c-d3a9-4123-9357-a5929a01d1d3" containerName="cilium-operator"
Jan 29 10:52:56.536983 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:42988.service - OpenSSH per-connection server daemon (10.0.0.1:42988).
Jan 29 10:52:56.542020 systemd-logind[1422]: Removed session 25.
Jan 29 10:52:56.554593 systemd[1]: Created slice kubepods-burstable-pode720325b_8ecb_47bb_ab8c_b63a052f6ce5.slice - libcontainer container kubepods-burstable-pode720325b_8ecb_47bb_ab8c_b63a052f6ce5.slice.
Jan 29 10:52:56.585559 kubelet[2520]: I0129 10:52:56.583349 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-hubble-tls\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585559 kubelet[2520]: I0129 10:52:56.583391 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-cni-path\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585559 kubelet[2520]: I0129 10:52:56.583409 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-clustermesh-secrets\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585559 kubelet[2520]: I0129 10:52:56.583427 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-host-proc-sys-net\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585559 kubelet[2520]: I0129 10:52:56.583446 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-host-proc-sys-kernel\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585559 kubelet[2520]: I0129 10:52:56.585271 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-lib-modules\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585808 kubelet[2520]: I0129 10:52:56.585312 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-cilium-ipsec-secrets\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585808 kubelet[2520]: I0129 10:52:56.585344 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-cilium-cgroup\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585808 kubelet[2520]: I0129 10:52:56.585374 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-cilium-config-path\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585808 kubelet[2520]: I0129 10:52:56.585404 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-cilium-run\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585808 kubelet[2520]: I0129 10:52:56.585424 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-bpf-maps\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585808 kubelet[2520]: I0129 10:52:56.585439 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-xtables-lock\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585921 kubelet[2520]: I0129 10:52:56.585455 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv5f\" (UniqueName: \"kubernetes.io/projected/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-kube-api-access-pkv5f\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585921 kubelet[2520]: I0129 10:52:56.585474 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-hostproc\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.585921 kubelet[2520]: I0129 10:52:56.585489 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e720325b-8ecb-47bb-ab8c-b63a052f6ce5-etc-cni-netd\") pod \"cilium-9m82x\" (UID: \"e720325b-8ecb-47bb-ab8c-b63a052f6ce5\") " pod="kube-system/cilium-9m82x"
Jan 29 10:52:56.603689 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 42988 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:52:56.604992 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:56.610849 systemd-logind[1422]: New session 26 of user core.
Jan 29 10:52:56.622354 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 10:52:56.673070 sshd[4361]: Connection closed by 10.0.0.1 port 42988
Jan 29 10:52:56.672357 sshd-session[4359]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:56.697806 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:42988.service: Deactivated successfully.
Jan 29 10:52:56.703098 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 10:52:56.704629 systemd-logind[1422]: Session 26 logged out. Waiting for processes to exit.
Jan 29 10:52:56.711454 systemd[1]: Started sshd@26-10.0.0.53:22-10.0.0.1:43000.service - OpenSSH per-connection server daemon (10.0.0.1:43000).
Jan 29 10:52:56.712469 systemd-logind[1422]: Removed session 26.
Jan 29 10:52:56.748588 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 43000 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:52:56.749822 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:56.753912 systemd-logind[1422]: New session 27 of user core.
Jan 29 10:52:56.765291 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 10:52:56.857706 kubelet[2520]: E0129 10:52:56.857672 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:52:56.861473 containerd[1442]: time="2025-01-29T10:52:56.861267241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9m82x,Uid:e720325b-8ecb-47bb-ab8c-b63a052f6ce5,Namespace:kube-system,Attempt:0,}"
Jan 29 10:52:56.879496 containerd[1442]: time="2025-01-29T10:52:56.879413152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:52:56.879496 containerd[1442]: time="2025-01-29T10:52:56.879465870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:52:56.879496 containerd[1442]: time="2025-01-29T10:52:56.879478310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:52:56.879719 containerd[1442]: time="2025-01-29T10:52:56.879549227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:52:56.897335 systemd[1]: Started cri-containerd-0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf.scope - libcontainer container 0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf.
Jan 29 10:52:56.919950 containerd[1442]: time="2025-01-29T10:52:56.919911513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9m82x,Uid:e720325b-8ecb-47bb-ab8c-b63a052f6ce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\""
Jan 29 10:52:56.920607 kubelet[2520]: E0129 10:52:56.920585 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:52:56.923053 containerd[1442]: time="2025-01-29T10:52:56.923018848Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 10:52:56.932560 containerd[1442]: time="2025-01-29T10:52:56.932523409Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e\""
Jan 29 10:52:56.933327 containerd[1442]: time="2025-01-29T10:52:56.933300263Z" level=info msg="StartContainer for \"6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e\""
Jan 29 10:52:56.961311 systemd[1]: Started cri-containerd-6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e.scope - libcontainer container 6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e.
Jan 29 10:52:56.983349 containerd[1442]: time="2025-01-29T10:52:56.983119631Z" level=info msg="StartContainer for \"6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e\" returns successfully"
Jan 29 10:52:56.997953 systemd[1]: cri-containerd-6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e.scope: Deactivated successfully.
Jan 29 10:52:57.024891 containerd[1442]: time="2025-01-29T10:52:57.024408958Z" level=info msg="shim disconnected" id=6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e namespace=k8s.io
Jan 29 10:52:57.025138 containerd[1442]: time="2025-01-29T10:52:57.025069856Z" level=warning msg="cleaning up after shim disconnected" id=6ebe3d812f448dc851f2464695d0f33cfffbae5a677caf0dd4b8bf7fb34a077e namespace=k8s.io
Jan 29 10:52:57.025138 containerd[1442]: time="2025-01-29T10:52:57.025088376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:52:57.034568 containerd[1442]: time="2025-01-29T10:52:57.034521752Z" level=warning msg="cleanup warnings time=\"2025-01-29T10:52:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 10:52:57.193955 kubelet[2520]: E0129 10:52:57.193919 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:52:57.197637 containerd[1442]: time="2025-01-29T10:52:57.197597576Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 10:52:57.213671 containerd[1442]: time="2025-01-29T10:52:57.213627299Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070\""
Jan 29 10:52:57.214043 containerd[1442]: time="2025-01-29T10:52:57.214023646Z" level=info msg="StartContainer for \"6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070\""
Jan 29 10:52:57.240379 systemd[1]: Started cri-containerd-6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070.scope - libcontainer container 6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070.
Jan 29 10:52:57.260243 containerd[1442]: time="2025-01-29T10:52:57.260205838Z" level=info msg="StartContainer for \"6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070\" returns successfully"
Jan 29 10:52:57.270811 systemd[1]: cri-containerd-6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070.scope: Deactivated successfully.
Jan 29 10:52:57.298088 containerd[1442]: time="2025-01-29T10:52:57.297924302Z" level=info msg="shim disconnected" id=6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070 namespace=k8s.io
Jan 29 10:52:57.298088 containerd[1442]: time="2025-01-29T10:52:57.297982660Z" level=warning msg="cleaning up after shim disconnected" id=6cfbbc26cf42b8810a8dee1a9411f0a73920be65436d51b32a549ce0301b9070 namespace=k8s.io
Jan 29 10:52:57.298088 containerd[1442]: time="2025-01-29T10:52:57.297990580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:52:57.718548 kubelet[2520]: I0129 10:52:57.718469 2520 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T10:52:57Z","lastTransitionTime":"2025-01-29T10:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 10:52:58.196487 kubelet[2520]: E0129 10:52:58.196456 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:52:58.199209 containerd[1442]: time="2025-01-29T10:52:58.199156909Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 10:52:58.211995 containerd[1442]: time="2025-01-29T10:52:58.211941513Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3\""
Jan 29 10:52:58.212848 containerd[1442]: time="2025-01-29T10:52:58.212467417Z" level=info msg="StartContainer for \"00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3\""
Jan 29 10:52:58.251330 systemd[1]: Started cri-containerd-00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3.scope - libcontainer container 00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3.
Jan 29 10:52:58.277464 systemd[1]: cri-containerd-00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3.scope: Deactivated successfully.
Jan 29 10:52:58.279976 containerd[1442]: time="2025-01-29T10:52:58.279853932Z" level=info msg="StartContainer for \"00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3\" returns successfully"
Jan 29 10:52:58.304916 containerd[1442]: time="2025-01-29T10:52:58.304855638Z" level=info msg="shim disconnected" id=00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3 namespace=k8s.io
Jan 29 10:52:58.304916 containerd[1442]: time="2025-01-29T10:52:58.304911837Z" level=warning msg="cleaning up after shim disconnected" id=00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3 namespace=k8s.io
Jan 29 10:52:58.304916 containerd[1442]: time="2025-01-29T10:52:58.304923716Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:52:58.694032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00c513dc609444fd8975b145354ef043565f2d6318e8fc556cad826b09157bc3-rootfs.mount: Deactivated successfully.
Jan 29 10:52:59.200090 kubelet[2520]: E0129 10:52:59.200001 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:52:59.204031 containerd[1442]: time="2025-01-29T10:52:59.203778039Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 10:52:59.218384 containerd[1442]: time="2025-01-29T10:52:59.218291528Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3\""
Jan 29 10:52:59.219717 containerd[1442]: time="2025-01-29T10:52:59.219630888Z" level=info msg="StartContainer for \"c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3\""
Jan 29 10:52:59.255337 systemd[1]: Started cri-containerd-c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3.scope - libcontainer container c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3.
Jan 29 10:52:59.275869 systemd[1]: cri-containerd-c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3.scope: Deactivated successfully.
Jan 29 10:52:59.277446 containerd[1442]: time="2025-01-29T10:52:59.277332615Z" level=info msg="StartContainer for \"c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3\" returns successfully"
Jan 29 10:52:59.298714 containerd[1442]: time="2025-01-29T10:52:59.298652422Z" level=info msg="shim disconnected" id=c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3 namespace=k8s.io
Jan 29 10:52:59.298714 containerd[1442]: time="2025-01-29T10:52:59.298708780Z" level=warning msg="cleaning up after shim disconnected" id=c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3 namespace=k8s.io
Jan 29 10:52:59.298714 containerd[1442]: time="2025-01-29T10:52:59.298716460Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:52:59.693817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c03756ff691c9ee13c4f9961b8a93285ef10a3c2e70d4868600df601d810b8b3-rootfs.mount: Deactivated successfully.
Jan 29 10:53:00.203857 kubelet[2520]: E0129 10:53:00.203788 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:00.206027 containerd[1442]: time="2025-01-29T10:53:00.205964930Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 10:53:00.220120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588988094.mount: Deactivated successfully.
Jan 29 10:53:00.224065 containerd[1442]: time="2025-01-29T10:53:00.224009576Z" level=info msg="CreateContainer within sandbox \"0ee5a9a6cd28a1377008b673cf079d19887a6859ac946b9c1e1405778a0a73cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2d9a1f8d64b4f6de2ccb5b43eaf5e5e4548d0776777c473aa5829de01c667a7\""
Jan 29 10:53:00.226051 containerd[1442]: time="2025-01-29T10:53:00.225324419Z" level=info msg="StartContainer for \"b2d9a1f8d64b4f6de2ccb5b43eaf5e5e4548d0776777c473aa5829de01c667a7\""
Jan 29 10:53:00.257335 systemd[1]: Started cri-containerd-b2d9a1f8d64b4f6de2ccb5b43eaf5e5e4548d0776777c473aa5829de01c667a7.scope - libcontainer container b2d9a1f8d64b4f6de2ccb5b43eaf5e5e4548d0776777c473aa5829de01c667a7.
Jan 29 10:53:00.282529 containerd[1442]: time="2025-01-29T10:53:00.282481791Z" level=info msg="StartContainer for \"b2d9a1f8d64b4f6de2ccb5b43eaf5e5e4548d0776777c473aa5829de01c667a7\" returns successfully"
Jan 29 10:53:00.557221 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 10:53:01.209104 kubelet[2520]: E0129 10:53:01.208396 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:01.222910 kubelet[2520]: I0129 10:53:01.222852 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9m82x" podStartSLOduration=5.2228379480000005 podStartE2EDuration="5.222837948s" podCreationTimestamp="2025-01-29 10:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:53:01.22205549 +0000 UTC m=+85.350333547" watchObservedRunningTime="2025-01-29 10:53:01.222837948 +0000 UTC m=+85.351115965"
Jan 29 10:53:01.948044 kubelet[2520]: E0129 10:53:01.948004 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:02.859362 kubelet[2520]: E0129 10:53:02.859322 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:03.422788 systemd-networkd[1368]: lxc_health: Link UP
Jan 29 10:53:03.433896 systemd-networkd[1368]: lxc_health: Gained carrier
Jan 29 10:53:04.860707 kubelet[2520]: E0129 10:53:04.860100 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:05.209344 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jan 29 10:53:05.216647 kubelet[2520]: E0129 10:53:05.216428 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:06.218106 kubelet[2520]: E0129 10:53:06.218075 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:06.947685 kubelet[2520]: E0129 10:53:06.947644 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:53:11.836754 sshd[4373]: Connection closed by 10.0.0.1 port 43000
Jan 29 10:53:11.837345 sshd-session[4371]: pam_unix(sshd:session): session closed for user core
Jan 29 10:53:11.840191 systemd[1]: sshd@26-10.0.0.53:22-10.0.0.1:43000.service: Deactivated successfully.
Jan 29 10:53:11.841851 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 10:53:11.843086 systemd-logind[1422]: Session 27 logged out. Waiting for processes to exit.
Jan 29 10:53:11.844034 systemd-logind[1422]: Removed session 27.