Jan 30 12:55:27.937178 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 12:55:27.937202 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 12:55:27.937212 kernel: KASLR enabled
Jan 30 12:55:27.937218 kernel: efi: EFI v2.7 by EDK II
Jan 30 12:55:27.937238 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 30 12:55:27.937247 kernel: random: crng init done
Jan 30 12:55:27.937256 kernel: ACPI: Early table checksum verification disabled
Jan 30 12:55:27.937263 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 30 12:55:27.937269 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 12:55:27.937277 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937284 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937290 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937296 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937302 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937309 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937317 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937324 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937330 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:27.937337 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 12:55:27.937407 kernel: NUMA: Failed to initialise from firmware
Jan 30 12:55:27.937414 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:55:27.937420 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 12:55:27.937427 kernel: Zone ranges:
Jan 30 12:55:27.937433 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:55:27.937439 kernel: DMA32 empty
Jan 30 12:55:27.937449 kernel: Normal empty
Jan 30 12:55:27.937456 kernel: Movable zone start for each node
Jan 30 12:55:27.937462 kernel: Early memory node ranges
Jan 30 12:55:27.937469 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 30 12:55:27.937475 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 12:55:27.937482 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 12:55:27.937488 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 12:55:27.937495 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 12:55:27.937501 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 12:55:27.937507 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 12:55:27.937514 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:55:27.937520 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 12:55:27.937528 kernel: psci: probing for conduit method from ACPI.
Jan 30 12:55:27.937535 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 12:55:27.937541 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 12:55:27.937551 kernel: psci: Trusted OS migration not required
Jan 30 12:55:27.937557 kernel: psci: SMC Calling Convention v1.1
Jan 30 12:55:27.937564 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 12:55:27.937573 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 12:55:27.937580 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 12:55:27.937587 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 12:55:27.937594 kernel: Detected PIPT I-cache on CPU0
Jan 30 12:55:27.937601 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 12:55:27.937607 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 12:55:27.937614 kernel: CPU features: detected: Spectre-v4
Jan 30 12:55:27.937621 kernel: CPU features: detected: Spectre-BHB
Jan 30 12:55:27.937627 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 12:55:27.937634 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 12:55:27.937642 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 12:55:27.937649 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 12:55:27.937656 kernel: alternatives: applying boot alternatives
Jan 30 12:55:27.937669 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 12:55:27.937678 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 12:55:27.937685 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 12:55:27.937692 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 12:55:27.937699 kernel: Fallback order for Node 0: 0
Jan 30 12:55:27.937705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 12:55:27.937712 kernel: Policy zone: DMA
Jan 30 12:55:27.937719 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 12:55:27.937727 kernel: software IO TLB: area num 4.
Jan 30 12:55:27.937734 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 12:55:27.937742 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 30 12:55:27.937748 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 12:55:27.937756 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 12:55:27.937763 kernel: rcu: RCU event tracing is enabled.
Jan 30 12:55:27.937770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 12:55:27.937777 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 12:55:27.937784 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 12:55:27.937791 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 12:55:27.937798 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 12:55:27.937804 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 12:55:27.937813 kernel: GICv3: 256 SPIs implemented
Jan 30 12:55:27.937820 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 12:55:27.937826 kernel: Root IRQ handler: gic_handle_irq
Jan 30 12:55:27.937834 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 12:55:27.937840 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 12:55:27.937847 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 12:55:27.937854 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 12:55:27.937861 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 12:55:27.937868 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 12:55:27.937875 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 12:55:27.937881 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 12:55:27.937890 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:27.937897 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 12:55:27.937904 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 12:55:27.937911 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 12:55:27.937918 kernel: arm-pv: using stolen time PV
Jan 30 12:55:27.937925 kernel: Console: colour dummy device 80x25
Jan 30 12:55:27.937932 kernel: ACPI: Core revision 20230628
Jan 30 12:55:27.937939 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 12:55:27.937946 kernel: pid_max: default: 32768 minimum: 301
Jan 30 12:55:27.937953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 12:55:27.937961 kernel: landlock: Up and running.
Jan 30 12:55:27.937969 kernel: SELinux: Initializing.
Jan 30 12:55:27.937976 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 12:55:27.937983 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 12:55:27.937990 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 12:55:27.937997 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 12:55:27.938004 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 12:55:27.938011 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 12:55:27.938018 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 12:55:27.938027 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 12:55:27.938034 kernel: Remapping and enabling EFI services.
Jan 30 12:55:27.938041 kernel: smp: Bringing up secondary CPUs ...
Jan 30 12:55:27.938048 kernel: Detected PIPT I-cache on CPU1
Jan 30 12:55:27.938055 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 12:55:27.938062 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 12:55:27.938069 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:27.938076 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 12:55:27.938083 kernel: Detected PIPT I-cache on CPU2
Jan 30 12:55:27.938090 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 12:55:27.938099 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 12:55:27.938106 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:27.938118 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 12:55:27.938127 kernel: Detected PIPT I-cache on CPU3
Jan 30 12:55:27.938135 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 12:55:27.938142 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 12:55:27.938150 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:27.938157 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 12:55:27.938164 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 12:55:27.938174 kernel: SMP: Total of 4 processors activated.
Jan 30 12:55:27.938181 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 12:55:27.938189 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 12:55:27.938196 kernel: CPU features: detected: Common not Private translations
Jan 30 12:55:27.938204 kernel: CPU features: detected: CRC32 instructions
Jan 30 12:55:27.938211 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 12:55:27.938219 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 12:55:27.938238 kernel: CPU features: detected: LSE atomic instructions
Jan 30 12:55:27.938248 kernel: CPU features: detected: Privileged Access Never
Jan 30 12:55:27.938256 kernel: CPU features: detected: RAS Extension Support
Jan 30 12:55:27.938263 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 12:55:27.938271 kernel: CPU: All CPU(s) started at EL1
Jan 30 12:55:27.938278 kernel: alternatives: applying system-wide alternatives
Jan 30 12:55:27.938285 kernel: devtmpfs: initialized
Jan 30 12:55:27.938293 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 12:55:27.938300 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 12:55:27.938308 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 12:55:27.938317 kernel: SMBIOS 3.0.0 present.
Jan 30 12:55:27.938325 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 30 12:55:27.938332 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 12:55:27.938370 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 12:55:27.938380 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 12:55:27.938388 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 12:55:27.938395 kernel: audit: initializing netlink subsys (disabled)
Jan 30 12:55:27.938403 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
Jan 30 12:55:27.938410 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 12:55:27.938420 kernel: cpuidle: using governor menu
Jan 30 12:55:27.938428 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 12:55:27.938435 kernel: ASID allocator initialised with 32768 entries
Jan 30 12:55:27.938443 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 12:55:27.938450 kernel: Serial: AMBA PL011 UART driver
Jan 30 12:55:27.938458 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 12:55:27.938465 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 12:55:27.938472 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 12:55:27.938480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 12:55:27.938489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 12:55:27.938497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 12:55:27.938505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 12:55:27.938512 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 12:55:27.938520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 12:55:27.938527 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 12:55:27.938534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 12:55:27.938542 kernel: ACPI: Added _OSI(Module Device)
Jan 30 12:55:27.938549 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 12:55:27.938558 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 12:55:27.938566 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 12:55:27.938574 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 12:55:27.938581 kernel: ACPI: Interpreter enabled
Jan 30 12:55:27.938588 kernel: ACPI: Using GIC for interrupt routing
Jan 30 12:55:27.938596 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 12:55:27.938603 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 12:55:27.938611 kernel: printk: console [ttyAMA0] enabled
Jan 30 12:55:27.938618 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 12:55:27.938786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 12:55:27.938868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 12:55:27.938936 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 12:55:27.939005 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 12:55:27.939070 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 12:55:27.939080 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 12:55:27.939087 kernel: PCI host bridge to bus 0000:00
Jan 30 12:55:27.939163 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 12:55:27.939426 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 12:55:27.939539 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 12:55:27.939603 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 12:55:27.939698 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 12:55:27.939781 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 12:55:27.939862 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 12:55:27.939951 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 12:55:27.940020 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 12:55:27.940088 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 12:55:27.940155 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 12:55:27.940222 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 12:55:27.940454 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 12:55:27.940527 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 12:55:27.940588 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 12:55:27.940599 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 12:55:27.940607 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 12:55:27.940615 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 12:55:27.940622 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 12:55:27.940630 kernel: iommu: Default domain type: Translated
Jan 30 12:55:27.940638 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 12:55:27.940646 kernel: efivars: Registered efivars operations
Jan 30 12:55:27.940656 kernel: vgaarb: loaded
Jan 30 12:55:27.940669 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 12:55:27.940678 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 12:55:27.940686 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 12:55:27.940694 kernel: pnp: PnP ACPI init
Jan 30 12:55:27.940805 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 12:55:27.940817 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 12:55:27.940824 kernel: NET: Registered PF_INET protocol family
Jan 30 12:55:27.940835 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 12:55:27.940843 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 12:55:27.940850 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 12:55:27.940858 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 12:55:27.940866 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 12:55:27.940874 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 12:55:27.940881 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 12:55:27.940889 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 12:55:27.940896 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 12:55:27.940905 kernel: PCI: CLS 0 bytes, default 64
Jan 30 12:55:27.940913 kernel: kvm [1]: HYP mode not available
Jan 30 12:55:27.940920 kernel: Initialise system trusted keyrings
Jan 30 12:55:27.940928 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 12:55:27.940935 kernel: Key type asymmetric registered
Jan 30 12:55:27.940942 kernel: Asymmetric key parser 'x509' registered
Jan 30 12:55:27.940950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 12:55:27.940957 kernel: io scheduler mq-deadline registered
Jan 30 12:55:27.940965 kernel: io scheduler kyber registered
Jan 30 12:55:27.940973 kernel: io scheduler bfq registered
Jan 30 12:55:27.940981 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 12:55:27.940988 kernel: ACPI: button: Power Button [PWRB]
Jan 30 12:55:27.940996 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 12:55:27.941066 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 12:55:27.941077 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 12:55:27.941084 kernel: thunder_xcv, ver 1.0
Jan 30 12:55:27.941092 kernel: thunder_bgx, ver 1.0
Jan 30 12:55:27.941099 kernel: nicpf, ver 1.0
Jan 30 12:55:27.941108 kernel: nicvf, ver 1.0
Jan 30 12:55:27.941188 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 12:55:27.941276 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:55:27 UTC (1738241727)
Jan 30 12:55:27.941289 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 12:55:27.941297 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 12:55:27.941305 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 12:55:27.941313 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 12:55:27.941320 kernel: NET: Registered PF_INET6 protocol family
Jan 30 12:55:27.941335 kernel: Segment Routing with IPv6
Jan 30 12:55:27.941413 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 12:55:27.941421 kernel: NET: Registered PF_PACKET protocol family
Jan 30 12:55:27.941432 kernel: Key type dns_resolver registered
Jan 30 12:55:27.941443 kernel: registered taskstats version 1
Jan 30 12:55:27.941451 kernel: Loading compiled-in X.509 certificates
Jan 30 12:55:27.941459 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 12:55:27.941467 kernel: Key type .fscrypt registered
Jan 30 12:55:27.941474 kernel: Key type fscrypt-provisioning registered
Jan 30 12:55:27.941486 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 12:55:27.941494 kernel: ima: Allocated hash algorithm: sha1
Jan 30 12:55:27.941503 kernel: ima: No architecture policies found
Jan 30 12:55:27.941510 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 12:55:27.941518 kernel: clk: Disabling unused clocks
Jan 30 12:55:27.941526 kernel: Freeing unused kernel memory: 39360K
Jan 30 12:55:27.941534 kernel: Run /init as init process
Jan 30 12:55:27.941542 kernel: with arguments:
Jan 30 12:55:27.941550 kernel: /init
Jan 30 12:55:27.941564 kernel: with environment:
Jan 30 12:55:27.941572 kernel: HOME=/
Jan 30 12:55:27.941581 kernel: TERM=linux
Jan 30 12:55:27.941592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 12:55:27.941602 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:55:27.941612 systemd[1]: Detected virtualization kvm.
Jan 30 12:55:27.941620 systemd[1]: Detected architecture arm64.
Jan 30 12:55:27.941630 systemd[1]: Running in initrd.
Jan 30 12:55:27.941638 systemd[1]: No hostname configured, using default hostname.
Jan 30 12:55:27.941646 systemd[1]: Hostname set to <localhost>.
Jan 30 12:55:27.941654 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:55:27.941661 systemd[1]: Queued start job for default target initrd.target.
Jan 30 12:55:27.941677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:55:27.941685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:55:27.941694 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 12:55:27.941704 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:55:27.941712 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 12:55:27.941720 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 12:55:27.941729 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 12:55:27.941738 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 12:55:27.941746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:55:27.941754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:55:27.941764 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:55:27.941772 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:55:27.941780 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:55:27.941788 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:55:27.941797 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:55:27.941805 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:55:27.941813 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 12:55:27.941821 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 12:55:27.941839 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:55:27.941849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:55:27.941857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:55:27.941865 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 12:55:27.941873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 12:55:27.941881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:55:27.941889 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 12:55:27.941897 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 12:55:27.941906 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:55:27.941915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:55:27.941924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:27.941932 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 12:55:27.941940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:55:27.941948 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 12:55:27.941956 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 12:55:27.941967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:27.941975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:55:27.941984 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:55:27.942022 systemd-journald[237]: Collecting audit messages is disabled.
Jan 30 12:55:27.942045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:55:27.942054 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 12:55:27.942063 systemd-journald[237]: Journal started
Jan 30 12:55:27.942082 systemd-journald[237]: Runtime Journal (/run/log/journal/ca4f3c32425746afa67eb51cb239204f) is 5.9M, max 47.3M, 41.4M free.
Jan 30 12:55:27.930931 systemd-modules-load[238]: Inserted module 'overlay'
Jan 30 12:55:27.944322 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:55:27.947500 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 30 12:55:27.948433 kernel: Bridge firewalling registered
Jan 30 12:55:27.949380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:55:27.950523 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:55:27.954403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:55:27.956407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:55:27.965564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:27.968263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:55:27.970416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:55:27.987405 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 12:55:27.989394 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:55:27.999412 dracut-cmdline[275]: dracut-dracut-053
Jan 30 12:55:28.002010 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 12:55:28.022267 systemd-resolved[277]: Positive Trust Anchors:
Jan 30 12:55:28.022286 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:55:28.022319 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:55:28.027444 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 30 12:55:28.028567 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:55:28.030428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:28.081259 kernel: SCSI subsystem initialized
Jan 30 12:55:28.086247 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 12:55:28.094268 kernel: iscsi: registered transport (tcp)
Jan 30 12:55:28.107256 kernel: iscsi: registered transport (qla4xxx)
Jan 30 12:55:28.107288 kernel: QLogic iSCSI HBA Driver
Jan 30 12:55:28.155885 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:55:28.170478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 12:55:28.189840 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 12:55:28.189910 kernel: device-mapper: uevent: version 1.0.3
Jan 30 12:55:28.189922 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 12:55:28.251272 kernel: raid6: neonx8 gen() 15562 MB/s
Jan 30 12:55:28.265284 kernel: raid6: neonx4 gen() 15556 MB/s
Jan 30 12:55:28.282279 kernel: raid6: neonx2 gen() 13242 MB/s
Jan 30 12:55:28.299272 kernel: raid6: neonx1 gen() 10492 MB/s
Jan 30 12:55:28.316496 kernel: raid6: int64x8 gen() 6953 MB/s
Jan 30 12:55:28.333278 kernel: raid6: int64x4 gen() 7350 MB/s
Jan 30 12:55:28.350274 kernel: raid6: int64x2 gen() 6046 MB/s
Jan 30 12:55:28.367278 kernel: raid6: int64x1 gen() 5025 MB/s
Jan 30 12:55:28.367350 kernel: raid6: using algorithm neonx8 gen() 15562 MB/s
Jan 30 12:55:28.384279 kernel: raid6: .... xor() 11857 MB/s, rmw enabled
Jan 30 12:55:28.384350 kernel: raid6: using neon recovery algorithm
Jan 30 12:55:28.389254 kernel: xor: measuring software checksum speed
Jan 30 12:55:28.389293 kernel: 8regs : 19759 MB/sec
Jan 30 12:55:28.390268 kernel: 32regs : 18726 MB/sec
Jan 30 12:55:28.390291 kernel: arm64_neon : 27007 MB/sec
Jan 30 12:55:28.390300 kernel: xor: using function: arm64_neon (27007 MB/sec)
Jan 30 12:55:28.442279 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 12:55:28.454421 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:55:28.468441 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:55:28.480800 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jan 30 12:55:28.484254 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:55:28.491394 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 12:55:28.502950 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jan 30 12:55:28.532526 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:55:28.543438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:55:28.586735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:55:28.595496 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 12:55:28.610264 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:55:28.611690 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:55:28.613149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:28.615021 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:55:28.624543 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 12:55:28.638401 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:55:28.654020 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 12:55:28.660308 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 12:55:28.660447 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 12:55:28.660460 kernel: GPT:9289727 != 19775487
Jan 30 12:55:28.660469 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 12:55:28.660479 kernel: GPT:9289727 != 19775487
Jan 30 12:55:28.660491 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 12:55:28.660501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:55:28.657104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:55:28.657223 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:28.662080 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:55:28.665568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:55:28.665748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:28.669050 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:28.674469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:28.686779 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (520)
Jan 30 12:55:28.690272 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
Jan 30 12:55:28.692688 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 12:55:28.696384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:28.704088 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 12:55:28.710541 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 12:55:28.711542 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 12:55:28.716653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:55:28.729422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 12:55:28.731534 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:55:28.750187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:28.831590 disk-uuid[549]: Primary Header is updated.
Jan 30 12:55:28.831590 disk-uuid[549]: Secondary Entries is updated.
Jan 30 12:55:28.831590 disk-uuid[549]: Secondary Header is updated.
Jan 30 12:55:28.839107 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:55:29.852251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:55:29.852421 disk-uuid[559]: The operation has completed successfully.
Jan 30 12:55:29.872486 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 12:55:29.872588 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 12:55:29.898405 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 12:55:29.901502 sh[575]: Success
Jan 30 12:55:29.911250 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 12:55:29.951211 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 12:55:29.962900 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 12:55:29.967239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 12:55:29.975471 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 12:55:29.975534 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:29.975545 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 12:55:29.976257 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 12:55:29.977319 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 12:55:29.983934 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 12:55:29.985215 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 12:55:29.992491 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 12:55:29.996772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 12:55:30.005481 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:30.005511 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:30.005522 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:55:30.009265 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:55:30.019177 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 12:55:30.019982 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:30.032959 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 12:55:30.042478 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 12:55:30.113613 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:55:30.120430 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:55:30.158312 systemd-networkd[761]: lo: Link UP
Jan 30 12:55:30.158323 systemd-networkd[761]: lo: Gained carrier
Jan 30 12:55:30.159032 systemd-networkd[761]: Enumeration completed
Jan 30 12:55:30.160273 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:55:30.161021 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:55:30.161024 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 12:55:30.161179 systemd[1]: Reached target network.target - Network.
Jan 30 12:55:30.161900 systemd-networkd[761]: eth0: Link UP
Jan 30 12:55:30.161903 systemd-networkd[761]: eth0: Gained carrier
Jan 30 12:55:30.161910 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:55:30.199295 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 12:55:30.212025 ignition[666]: Ignition 2.19.0
Jan 30 12:55:30.212035 ignition[666]: Stage: fetch-offline
Jan 30 12:55:30.212074 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:30.212083 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:30.212326 ignition[666]: parsed url from cmdline: ""
Jan 30 12:55:30.212329 ignition[666]: no config URL provided
Jan 30 12:55:30.212334 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 12:55:30.212341 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 30 12:55:30.212366 ignition[666]: op(1): [started] loading QEMU firmware config module
Jan 30 12:55:30.212371 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 12:55:30.221009 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 30 12:55:30.261620 ignition[666]: parsing config with SHA512: fbd8ec1860e5e609793f38d2c80241e1c8845b4a1176fc17e25cd0cda58edfd307a22c355ddf59e0f070542c8283fece9e638faf22faa1c2172c820a7f21a255
Jan 30 12:55:30.266215 unknown[666]: fetched base config from "system"
Jan 30 12:55:30.266238 unknown[666]: fetched user config from "qemu"
Jan 30 12:55:30.266726 ignition[666]: fetch-offline: fetch-offline passed
Jan 30 12:55:30.266837 ignition[666]: Ignition finished successfully
Jan 30 12:55:30.269153 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:55:30.270577 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 12:55:30.282402 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 12:55:30.293257 ignition[772]: Ignition 2.19.0
Jan 30 12:55:30.293267 ignition[772]: Stage: kargs
Jan 30 12:55:30.293445 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:30.293454 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:30.294360 ignition[772]: kargs: kargs passed
Jan 30 12:55:30.294406 ignition[772]: Ignition finished successfully
Jan 30 12:55:30.296673 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 12:55:30.307423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 12:55:30.318248 ignition[780]: Ignition 2.19.0
Jan 30 12:55:30.318258 ignition[780]: Stage: disks
Jan 30 12:55:30.318445 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:30.318455 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:30.319382 ignition[780]: disks: disks passed
Jan 30 12:55:30.321713 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 12:55:30.319430 ignition[780]: Ignition finished successfully
Jan 30 12:55:30.323013 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 12:55:30.323980 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 12:55:30.325467 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 12:55:30.326956 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:55:30.328586 systemd[1]: Reached target basic.target - Basic System.
Jan 30 12:55:30.340414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 12:55:30.351797 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 12:55:30.373944 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 12:55:30.385439 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 12:55:30.431256 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 12:55:30.431709 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 12:55:30.432855 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 12:55:30.443349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 12:55:30.445066 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 12:55:30.446069 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 12:55:30.446171 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 12:55:30.446219 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:55:30.454125 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Jan 30 12:55:30.454150 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:30.453393 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 12:55:30.459154 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:30.459178 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:55:30.459188 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:55:30.458548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 12:55:30.462853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 12:55:30.505433 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 12:55:30.509524 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 30 12:55:30.512904 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 12:55:30.516972 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 12:55:30.608885 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 12:55:30.620373 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 12:55:30.621912 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 12:55:30.627240 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:30.648254 ignition[912]: INFO : Ignition 2.19.0
Jan 30 12:55:30.648254 ignition[912]: INFO : Stage: mount
Jan 30 12:55:30.648254 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:30.648254 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:30.648254 ignition[912]: INFO : mount: mount passed
Jan 30 12:55:30.648254 ignition[912]: INFO : Ignition finished successfully
Jan 30 12:55:30.649889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 12:55:30.651477 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 12:55:30.658371 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 12:55:30.974770 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 12:55:30.987440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 12:55:30.993252 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Jan 30 12:55:30.995353 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:30.995373 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:30.995384 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:55:30.998245 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:55:30.999498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 12:55:31.018221 ignition[944]: INFO : Ignition 2.19.0
Jan 30 12:55:31.018221 ignition[944]: INFO : Stage: files
Jan 30 12:55:31.019913 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:31.019913 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:31.019913 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 12:55:31.023737 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 12:55:31.023737 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 12:55:31.026743 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 12:55:31.028235 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 12:55:31.028235 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 12:55:31.027256 unknown[944]: wrote ssh authorized keys file for user: core
Jan 30 12:55:31.031981 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 12:55:31.031981 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 12:55:31.092716 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 12:55:31.201489 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 12:55:31.201489 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 12:55:31.204537 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 12:55:31.448482 systemd-networkd[761]: eth0: Gained IPv6LL
Jan 30 12:55:31.526136 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 12:55:31.607216 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 12:55:31.609130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 12:55:31.852171 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 12:55:32.085292 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 12:55:32.085292 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 12:55:32.088422 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 12:55:32.114352 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 12:55:32.118545 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 12:55:32.121140 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 12:55:32.121140 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 12:55:32.121140 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 12:55:32.121140 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:55:32.121140 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:55:32.121140 ignition[944]: INFO : files: files passed
Jan 30 12:55:32.121140 ignition[944]: INFO : Ignition finished successfully
Jan 30 12:55:32.123551 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 12:55:32.136791 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 12:55:32.139638 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 12:55:32.142299 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 12:55:32.142391 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 12:55:32.150972 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 12:55:32.154759 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:32.154759 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:32.158379 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:32.158265 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:55:32.159577 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 12:55:32.170467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 12:55:32.195281 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 12:55:32.195429 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 12:55:32.197575 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 12:55:32.199141 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 12:55:32.200997 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 12:55:32.201957 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 12:55:32.224372 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:55:32.231499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 12:55:32.245885 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:32.246989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:32.248989 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 12:55:32.250763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 12:55:32.250896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:55:32.253366 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 12:55:32.255325 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 12:55:32.256979 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 12:55:32.258648 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:55:32.260448 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 12:55:32.262324 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 12:55:32.264096 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:55:32.266036 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 12:55:32.268026 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 12:55:32.269745 systemd[1]: Stopped target swap.target - Swaps. Jan 30 12:55:32.271203 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 12:55:32.271349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:55:32.273673 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:55:32.275474 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:55:32.277260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 12:55:32.281656 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:55:32.282757 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 12:55:32.282889 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 12:55:32.285792 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 12:55:32.285911 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:55:32.287785 systemd[1]: Stopped target paths.target - Path Units. Jan 30 12:55:32.289365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 12:55:32.293301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:55:32.294445 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 12:55:32.296576 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 12:55:32.298060 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 12:55:32.298158 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:55:32.299657 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 12:55:32.299744 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:55:32.301206 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 12:55:32.301333 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:55:32.303023 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 12:55:32.303130 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 12:55:32.313461 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 12:55:32.314259 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 12:55:32.314397 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:55:32.319519 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 12:55:32.320269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 12:55:32.320407 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:55:32.323183 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 30 12:55:32.327779 ignition[999]: INFO : Ignition 2.19.0 Jan 30 12:55:32.327779 ignition[999]: INFO : Stage: umount Jan 30 12:55:32.327779 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:55:32.327779 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:55:32.327779 ignition[999]: INFO : umount: umount passed Jan 30 12:55:32.327779 ignition[999]: INFO : Ignition finished successfully Jan 30 12:55:32.323414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:55:32.330532 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 12:55:32.330621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 12:55:32.332041 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 12:55:32.332125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 12:55:32.335893 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 12:55:32.336433 systemd[1]: Stopped target network.target - Network. Jan 30 12:55:32.337419 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 12:55:32.337486 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 12:55:32.339147 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 12:55:32.339196 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 12:55:32.340757 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 12:55:32.340800 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 12:55:32.342257 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 12:55:32.342303 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 12:55:32.343915 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 12:55:32.345104 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 12:55:32.358298 systemd-networkd[761]: eth0: DHCPv6 lease lost Jan 30 12:55:32.360166 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 12:55:32.360328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 12:55:32.363275 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 12:55:32.363404 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 12:55:32.366033 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 12:55:32.366090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:55:32.380366 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 12:55:32.381321 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 12:55:32.381403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:55:32.383367 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:55:32.383421 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:55:32.385128 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 12:55:32.385182 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 12:55:32.387293 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 12:55:32.387345 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 12:55:32.389405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:55:32.399141 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 12:55:32.399335 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 12:55:32.410999 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 12:55:32.411157 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:55:32.413324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 12:55:32.413364 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 12:55:32.415113 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 12:55:32.415144 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:55:32.416857 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 12:55:32.416909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:55:32.419450 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 12:55:32.419496 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 12:55:32.422128 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:55:32.422175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:55:32.440453 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 12:55:32.441262 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 12:55:32.441321 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:55:32.443188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:55:32.443306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:55:32.446119 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 12:55:32.447273 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 12:55:32.535793 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 12:55:32.535910 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 12:55:32.537766 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 12:55:32.539017 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 12:55:32.539075 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 12:55:32.548442 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 12:55:32.557044 systemd[1]: Switching root. Jan 30 12:55:32.578282 systemd-journald[237]: Journal stopped Jan 30 12:55:33.395264 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jan 30 12:55:33.395320 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 12:55:33.395336 kernel: SELinux: policy capability open_perms=1 Jan 30 12:55:33.395346 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 12:55:33.395355 kernel: SELinux: policy capability always_check_network=0 Jan 30 12:55:33.395365 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 12:55:33.395376 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 12:55:33.395385 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 12:55:33.395395 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 12:55:33.395407 kernel: audit: type=1403 audit(1738241732.790:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 12:55:33.395419 systemd[1]: Successfully loaded SELinux policy in 34.962ms. Jan 30 12:55:33.395440 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.880ms. Jan 30 12:55:33.395453 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:55:33.395464 systemd[1]: Detected virtualization kvm. Jan 30 12:55:33.395474 systemd[1]: Detected architecture arm64. Jan 30 12:55:33.395485 systemd[1]: Detected first boot. Jan 30 12:55:33.395495 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:55:33.395506 zram_generator::config[1044]: No configuration found. Jan 30 12:55:33.395517 systemd[1]: Populated /etc with preset unit settings. Jan 30 12:55:33.395529 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 12:55:33.395540 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 12:55:33.395551 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 12:55:33.395563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 12:55:33.395574 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 12:55:33.395585 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 12:55:33.395596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 12:55:33.395607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 12:55:33.395625 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 12:55:33.395642 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 12:55:33.395653 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 12:55:33.395665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:55:33.395677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:55:33.395687 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 12:55:33.395698 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 12:55:33.395709 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 30 12:55:33.395720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:55:33.395731 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 12:55:33.395743 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:55:33.395753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 12:55:33.395764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 12:55:33.395775 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 12:55:33.395790 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 12:55:33.395800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:55:33.395812 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:55:33.395825 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:55:33.395836 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:55:33.395847 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 12:55:33.395858 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 12:55:33.395869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:55:33.395880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:55:33.395891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:55:33.395904 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 12:55:33.395914 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 12:55:33.395926 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 12:55:33.395938 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 12:55:33.395949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 12:55:33.395960 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 12:55:33.395971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 12:55:33.395983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 12:55:33.395993 systemd[1]: Reached target machines.target - Containers. Jan 30 12:55:33.396004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 12:55:33.396015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:55:33.396028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:55:33.396040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 12:55:33.396051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:55:33.396062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:55:33.396073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:55:33.396084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 12:55:33.396095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 12:55:33.396106 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 12:55:33.396117 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 12:55:33.396130 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 12:55:33.396141 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 12:55:33.396152 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 12:55:33.396162 kernel: fuse: init (API version 7.39) Jan 30 12:55:33.396173 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:55:33.396184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:55:33.396195 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 12:55:33.396206 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 12:55:33.396218 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:55:33.396238 kernel: loop: module loaded Jan 30 12:55:33.396254 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 12:55:33.396266 systemd[1]: Stopped verity-setup.service. Jan 30 12:55:33.396277 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 12:55:33.396288 kernel: ACPI: bus type drm_connector registered Jan 30 12:55:33.396298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 12:55:33.396309 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 12:55:33.396322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 12:55:33.396332 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 12:55:33.396343 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 12:55:33.396375 systemd-journald[1111]: Collecting audit messages is disabled. Jan 30 12:55:33.396397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:55:33.396411 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 12:55:33.396424 systemd-journald[1111]: Journal started Jan 30 12:55:33.396445 systemd-journald[1111]: Runtime Journal (/run/log/journal/ca4f3c32425746afa67eb51cb239204f) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:55:33.185337 systemd[1]: Queued start job for default target multi-user.target. Jan 30 12:55:33.213323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 12:55:33.213718 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 12:55:33.398054 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 12:55:33.400821 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:55:33.401645 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 12:55:33.403056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:55:33.403218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:55:33.404392 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:55:33.404533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:55:33.405975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:55:33.406113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 12:55:33.407396 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 12:55:33.407537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 12:55:33.408600 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:55:33.408752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:55:33.409890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:55:33.411086 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 12:55:33.412555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 12:55:33.424960 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 12:55:33.437407 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 12:55:33.439736 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 12:55:33.440890 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 12:55:33.440935 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:55:33.442986 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 12:55:33.445204 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 12:55:33.447314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 12:55:33.448217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:55:33.450147 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 12:55:33.452087 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 12:55:33.453116 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:55:33.456423 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 12:55:33.457295 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:55:33.461599 systemd-journald[1111]: Time spent on flushing to /var/log/journal/ca4f3c32425746afa67eb51cb239204f is 17.302ms for 855 entries. Jan 30 12:55:33.461599 systemd-journald[1111]: System Journal (/var/log/journal/ca4f3c32425746afa67eb51cb239204f) is 8.0M, max 195.6M, 187.6M free. Jan 30 12:55:33.512464 systemd-journald[1111]: Received client request to flush runtime journal. Jan 30 12:55:33.512523 kernel: loop0: detected capacity change from 0 to 114328 Jan 30 12:55:33.512539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 12:55:33.462431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:55:33.467547 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 12:55:33.470999 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 12:55:33.475247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:55:33.476381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 12:55:33.477537 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
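The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of systemd's modprobe@.service template; each instance substitutes its name for the %I specifier, which is why the kernel's "fuse: init" and "loop: module loaded" lines appear immediately after. An abridged sketch of the template (paraphrased from the unit systemd ships; the exact content varies by version):

  # /usr/lib/systemd/system/modprobe@.service (abridged sketch)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target
  ConditionCapability=CAP_SYS_MODULE

  [Service]
  Type=oneshot
  ExecStart=-/sbin/modprobe -abq %I

The leading '-' on ExecStart means a module that fails to load is not treated as a unit failure.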
Jan 30 12:55:33.478858 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 12:55:33.491766 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 12:55:33.500289 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 12:55:33.502078 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 12:55:33.511597 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 12:55:33.520894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:55:33.523314 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 12:55:33.527844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 12:55:33.544924 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 12:55:33.545684 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 12:55:33.548104 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 12:55:33.551262 kernel: loop1: detected capacity change from 0 to 114432 Jan 30 12:55:33.569528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:55:33.591549 kernel: loop2: detected capacity change from 0 to 189592 Jan 30 12:55:33.593444 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 30 12:55:33.593465 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 30 12:55:33.598900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:55:33.629552 kernel: loop3: detected capacity change from 0 to 114328 Jan 30 12:55:33.634412 kernel: loop4: detected capacity change from 0 to 114432 Jan 30 12:55:33.639263 kernel: loop5: detected capacity change from 0 to 189592 Jan 30 12:55:33.643336 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 12:55:33.644113 (sd-merge)[1179]: Merged extensions into '/usr'. Jan 30 12:55:33.648960 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 12:55:33.648976 systemd[1]: Reloading... Jan 30 12:55:33.704280 zram_generator::config[1202]: No configuration found. Jan 30 12:55:33.771209 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 12:55:33.820012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:55:33.856996 systemd[1]: Reloading finished in 207 ms. Jan 30 12:55:33.886657 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 12:55:33.887936 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 12:55:33.910455 systemd[1]: Starting ensure-sysext.service... Jan 30 12:55:33.912553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:55:33.929644 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Jan 30 12:55:33.929663 systemd[1]: Reloading... 
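The (sd-merge) lines above are systemd-sysext at work: it found the containerd-flatcar, docker-flatcar and kubernetes extension images (the last one via the /etc/extensions/kubernetes.raw symlink Ignition wrote earlier), overlaid them onto /usr, and then asked PID 1 to reload so the units shipped inside the extensions become visible. The equivalent manual workflow is roughly:

  systemd-sysext status    # list extension images and their merge state
  systemd-sysext merge     # overlay images from /etc, /run and /var/lib/extensions onto /usr (and /opt)
  systemd-sysext refresh   # unmerge and re-merge after adding or removing images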
Jan 30 12:55:33.932881 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 12:55:33.933143 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 12:55:33.933801 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 12:55:33.934016 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 30 12:55:33.934071 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 30 12:55:33.937533 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:55:33.937548 systemd-tmpfiles[1240]: Skipping /boot Jan 30 12:55:33.945056 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:55:33.945070 systemd-tmpfiles[1240]: Skipping /boot Jan 30 12:55:33.983268 zram_generator::config[1276]: No configuration found. Jan 30 12:55:34.062024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:55:34.099280 systemd[1]: Reloading finished in 169 ms. Jan 30 12:55:34.113628 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 12:55:34.125761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:55:34.133361 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:55:34.135991 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 12:55:34.138235 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 12:55:34.141540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:55:34.150269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:55:34.153589 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 12:55:34.157663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:55:34.159671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:55:34.165276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:55:34.169010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:55:34.170075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:55:34.174444 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 12:55:34.176625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:55:34.176770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:55:34.180307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:55:34.180453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:55:34.182837 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:55:34.184333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
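The "Duplicate line for path …, ignoring" messages above are harmless: when more than one tmpfiles.d fragment declares the same path, systemd-tmpfiles keeps the first declaration it parses and logs the rest. A hypothetical illustration (these file names are examples, not the actual Flatcar fragments):

  # /usr/lib/tmpfiles.d/aa-first.conf   (parsed first, wins)
  d /root 0700 root root -

  # /usr/lib/tmpfiles.d/zz-later.conf   (duplicate path, logged and ignored)
  d /root 0750 root root -

"Detected autofs mount point /boot … Skipping /boot" is likewise expected on Flatcar: /boot is an automount, and tmpfiles skips entries beneath it to avoid triggering the mount.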
Jan 30 12:55:34.189415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:55:34.189669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:55:34.193342 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 12:55:34.199918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:55:34.211380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:55:34.212122 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Jan 30 12:55:34.216506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:55:34.218690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:55:34.219870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:55:34.223909 augenrules[1333]: No rules Jan 30 12:55:34.224207 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 12:55:34.226349 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 12:55:34.229081 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:55:34.231823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 12:55:34.233353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:55:34.233485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:55:34.239795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:55:34.241268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:55:34.243601 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 12:55:34.246765 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:55:34.247972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:55:34.249709 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 12:55:34.251421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:55:34.266269 systemd[1]: Finished ensure-sysext.service. Jan 30 12:55:34.270296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:55:34.278567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:55:34.285018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:55:34.287342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:55:34.289405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:55:34.290405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:55:34.292779 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:55:34.294971 systemd-resolved[1307]: Positive Trust Anchors: Jan 30 12:55:34.294989 systemd-resolved[1307]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:55:34.295023 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:55:34.296924 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 12:55:34.297954 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 12:55:34.298423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:55:34.298558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:55:34.299821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:55:34.299935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:55:34.304638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:55:34.306340 systemd-resolved[1307]: Defaulting to hostname 'linux'. Jan 30 12:55:34.311035 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:55:34.311208 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:55:34.317378 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:55:34.318521 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 12:55:34.318552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:55:34.338912 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:55:34.339096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:55:34.340656 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:55:34.386250 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1364) Jan 30 12:55:34.403364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:55:34.413120 systemd-networkd[1377]: lo: Link UP Jan 30 12:55:34.413135 systemd-networkd[1377]: lo: Gained carrier Jan 30 12:55:34.413941 systemd-networkd[1377]: Enumeration completed Jan 30 12:55:34.414349 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:55:34.415178 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:55:34.415181 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:55:34.415395 systemd[1]: Reached target network.target - Network. Jan 30 12:55:34.416164 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
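The trust-anchor dump above is systemd-resolved's built-in DNSSEC configuration: the positive anchor is the root zone's KSK-2017 DS record, and the negative anchors are the private-address reverse zones and special-use names for which validation is never attempted. Local overrides use the same record syntax; a sketch, assuming the stock dnssec-trust-anchors.d(5) layout:

  # /etc/dnssec-trust-anchors.d/root.positive
  # (same record systemd-resolved compiles in by default)
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

Files ending in .negative list one domain per line and extend the negative set. None of this has any effect unless DNSSEC= is enabled in resolved.conf.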
Jan 30 12:55:34.416192 systemd-networkd[1377]: eth0: Link UP Jan 30 12:55:34.416194 systemd-networkd[1377]: eth0: Gained carrier Jan 30 12:55:34.416203 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:55:34.434474 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 12:55:34.435797 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 12:55:34.438283 systemd-networkd[1377]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:55:34.439100 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Jan 30 12:55:34.439336 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 12:55:34.441101 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 12:55:34.441154 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-01-30 12:55:34.208976 UTC. Jan 30 12:55:34.443209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:55:34.444492 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 12:55:34.446947 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 12:55:34.449174 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 12:55:34.465408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:55:34.477218 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 12:55:34.491011 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:55:34.528592 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 12:55:34.529929 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:55:34.530843 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:55:34.531785 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 12:55:34.532799 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 12:55:34.533994 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 12:55:34.534923 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 12:55:34.535876 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 12:55:34.536926 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 12:55:34.536959 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:55:34.537645 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:55:34.541349 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 12:55:34.543622 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 12:55:34.551615 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 12:55:34.554144 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
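eth0 matched /usr/lib/systemd/network/zz-default.network, the catch-all fallback Flatcar ships, and obtained 10.0.0.64/16 over DHCPv4; the "potentially unpredictable interface name" warning only means the match is by wildcard name rather than by a stable property such as the MAC address. A sketch of what such a catch-all looks like (the actual Flatcar file may carry more options):

  # /usr/lib/systemd/network/zz-default.network (sketch)
  [Match]
  Name=*

  [Network]
  DHCP=yes

The zz- prefix sorts the file last, so any more specific .network file installed by the administrator takes precedence.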
Jan 30 12:55:34.555590 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 12:55:34.556762 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:55:34.557505 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:55:34.558426 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:55:34.558472 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:55:34.559587 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 12:55:34.561479 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 12:55:34.564366 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:55:34.565393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 12:55:34.569531 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 12:55:34.576432 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 12:55:34.581442 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 12:55:34.583561 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 12:55:34.584299 jq[1411]: false Jan 30 12:55:34.586917 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 12:55:34.589187 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 12:55:34.596491 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 12:55:34.602592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 12:55:34.603122 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 12:55:34.603741 extend-filesystems[1412]: Found loop3 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found loop4 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found loop5 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda1 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda2 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda3 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found usr Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda4 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda6 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda7 Jan 30 12:55:34.603741 extend-filesystems[1412]: Found vda9 Jan 30 12:55:34.603741 extend-filesystems[1412]: Checking size of /dev/vda9 Jan 30 12:55:34.604805 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 12:55:34.607278 dbus-daemon[1410]: [system] SELinux support is enabled Jan 30 12:55:34.606907 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 12:55:34.608442 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 12:55:34.612895 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 12:55:34.651605 jq[1429]: true Jan 30 12:55:34.616607 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 30 12:55:34.617477 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 12:55:34.651958 tar[1432]: linux-arm64/helm Jan 30 12:55:34.620117 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 12:55:34.652220 jq[1433]: true Jan 30 12:55:34.620435 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 12:55:34.631539 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 12:55:34.631733 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 12:55:34.633979 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:55:34.634011 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 12:55:34.659300 extend-filesystems[1412]: Resized partition /dev/vda9 Jan 30 12:55:34.659705 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:55:34.670343 update_engine[1423]: I20250130 12:55:34.665535 1423 main.cc:92] Flatcar Update Engine starting Jan 30 12:55:34.670343 update_engine[1423]: I20250130 12:55:34.669061 1423 update_check_scheduler.cc:74] Next update check in 9m51s Jan 30 12:55:34.680502 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1348) Jan 30 12:55:34.659872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 12:55:34.678039 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:55:34.682157 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:55:34.692931 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 12:55:34.682358 systemd-logind[1418]: New seat seat0. Jan 30 12:55:34.693024 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:55:34.691668 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:55:34.691914 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:55:34.694038 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 12:55:34.741264 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:55:34.763716 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:55:34.763716 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:55:34.763716 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:55:34.768879 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jan 30 12:55:34.765959 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:55:34.766084 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:55:34.766199 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:55:34.771308 bash[1463]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:55:34.775234 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
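extend-filesystems grew the root filesystem to fill its partition: resize2fs ran online against the mounted /dev/vda9, taking it from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB (1864699 × 4096 ≈ 7.64 GB). The operation boils down to:

  # online grow of a mounted ext4 filesystem to the size of its partition
  resize2fs /dev/vda9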
Jan 30 12:55:34.779193 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 12:55:34.955419 containerd[1438]: time="2025-01-30T12:55:34.955257680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 12:55:34.992656 containerd[1438]: time="2025-01-30T12:55:34.992546200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.994866 containerd[1438]: time="2025-01-30T12:55:34.994811120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:34.994866 containerd[1438]: time="2025-01-30T12:55:34.994855640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:55:34.994866 containerd[1438]: time="2025-01-30T12:55:34.994874360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:55:34.995177 containerd[1438]: time="2025-01-30T12:55:34.995143720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:55:34.995177 containerd[1438]: time="2025-01-30T12:55:34.995173880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995278 containerd[1438]: time="2025-01-30T12:55:34.995257000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995344 containerd[1438]: time="2025-01-30T12:55:34.995327240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995640 containerd[1438]: time="2025-01-30T12:55:34.995591800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995640 containerd[1438]: time="2025-01-30T12:55:34.995627560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995694 containerd[1438]: time="2025-01-30T12:55:34.995644480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995694 containerd[1438]: time="2025-01-30T12:55:34.995654840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.995809 containerd[1438]: time="2025-01-30T12:55:34.995788400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.996087 containerd[1438]: time="2025-01-30T12:55:34.996066000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:34.996275 containerd[1438]: time="2025-01-30T12:55:34.996252480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:34.996304 containerd[1438]: time="2025-01-30T12:55:34.996276960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 12:55:34.996373 containerd[1438]: time="2025-01-30T12:55:34.996357600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:55:34.996496 containerd[1438]: time="2025-01-30T12:55:34.996479960Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:55:35.001601 containerd[1438]: time="2025-01-30T12:55:35.001542497Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:55:35.001601 containerd[1438]: time="2025-01-30T12:55:35.001611554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:55:35.001719 containerd[1438]: time="2025-01-30T12:55:35.001627284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:55:35.001719 containerd[1438]: time="2025-01-30T12:55:35.001642121Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:55:35.001719 containerd[1438]: time="2025-01-30T12:55:35.001655598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:55:35.001829 containerd[1438]: time="2025-01-30T12:55:35.001809365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:55:35.002071 containerd[1438]: time="2025-01-30T12:55:35.002051764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:55:35.002173 containerd[1438]: time="2025-01-30T12:55:35.002155777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:55:35.002197 containerd[1438]: time="2025-01-30T12:55:35.002175896Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:55:35.002197 containerd[1438]: time="2025-01-30T12:55:35.002188558Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:55:35.002246 containerd[1438]: time="2025-01-30T12:55:35.002207939Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002246 containerd[1438]: time="2025-01-30T12:55:35.002220912Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002289 containerd[1438]: time="2025-01-30T12:55:35.002248643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002289 containerd[1438]: time="2025-01-30T12:55:35.002271365Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002289 containerd[1438]: time="2025-01-30T12:55:35.002286240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 12:55:35.002346 containerd[1438]: time="2025-01-30T12:55:35.002299213Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002346 containerd[1438]: time="2025-01-30T12:55:35.002312263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002346 containerd[1438]: time="2025-01-30T12:55:35.002323022Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:55:35.002346 containerd[1438]: time="2025-01-30T12:55:35.002342597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002408 containerd[1438]: time="2025-01-30T12:55:35.002363570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002408 containerd[1438]: time="2025-01-30T12:55:35.002376349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002408 containerd[1438]: time="2025-01-30T12:55:35.002389088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002408 containerd[1438]: time="2025-01-30T12:55:35.002400623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002478 containerd[1438]: time="2025-01-30T12:55:35.002427850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002478 containerd[1438]: time="2025-01-30T12:55:35.002441483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002478 containerd[1438]: time="2025-01-30T12:55:35.002454028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002478 containerd[1438]: time="2025-01-30T12:55:35.002465719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002557 containerd[1438]: time="2025-01-30T12:55:35.002478847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002557 containerd[1438]: time="2025-01-30T12:55:35.002490693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002557 containerd[1438]: time="2025-01-30T12:55:35.002522542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002557 containerd[1438]: time="2025-01-30T12:55:35.002535553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002629 containerd[1438]: time="2025-01-30T12:55:35.002560605Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:55:35.002629 containerd[1438]: time="2025-01-30T12:55:35.002582161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002629 containerd[1438]: time="2025-01-30T12:55:35.002594822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 30 12:55:35.002629 containerd[1438]: time="2025-01-30T12:55:35.002606008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:55:35.002805 containerd[1438]: time="2025-01-30T12:55:35.002788361Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:55:35.002826 containerd[1438]: time="2025-01-30T12:55:35.002810111Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:55:35.002826 containerd[1438]: time="2025-01-30T12:55:35.002820754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:55:35.002866 containerd[1438]: time="2025-01-30T12:55:35.002832367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:55:35.002866 containerd[1438]: time="2025-01-30T12:55:35.002841999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.002866 containerd[1438]: time="2025-01-30T12:55:35.002854428Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:55:35.002866 containerd[1438]: time="2025-01-30T12:55:35.002863322Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:55:35.002930 containerd[1438]: time="2025-01-30T12:55:35.002873886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 12:55:35.003336 containerd[1438]: time="2025-01-30T12:55:35.003263877Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:55:35.003336 containerd[1438]: time="2025-01-30T12:55:35.003327496Z" level=info msg="Connect containerd service" Jan 30 12:55:35.003549 containerd[1438]: time="2025-01-30T12:55:35.003534784Z" level=info msg="using legacy CRI server" Jan 30 12:55:35.003549 containerd[1438]: time="2025-01-30T12:55:35.003545815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:55:35.003636 containerd[1438]: time="2025-01-30T12:55:35.003624038Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:55:35.005881 containerd[1438]: time="2025-01-30T12:55:35.005835887Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:55:35.006398 containerd[1438]: time="2025-01-30T12:55:35.006239432Z" level=info msg="Start subscribing containerd event" Jan 30 12:55:35.006398 containerd[1438]: time="2025-01-30T12:55:35.006316646Z" level=info msg="Start recovering state" Jan 30 12:55:35.006508 containerd[1438]: time="2025-01-30T12:55:35.006484978Z" level=info msg="Start event monitor" Jan 30 12:55:35.006536 containerd[1438]: time="2025-01-30T12:55:35.006507194Z" level=info msg="Start snapshots syncer" Jan 30 12:55:35.006536 containerd[1438]: time="2025-01-30T12:55:35.006518380Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:55:35.006536 containerd[1438]: time="2025-01-30T12:55:35.006532479Z" level=info msg="Start streaming server" Jan 30 12:55:35.007024 containerd[1438]: time="2025-01-30T12:55:35.006987409Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:55:35.007339 containerd[1438]: time="2025-01-30T12:55:35.007176248Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:55:35.007501 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:55:35.009110 containerd[1438]: time="2025-01-30T12:55:35.009048870Z" level=info msg="containerd successfully booted in 0.057225s" Jan 30 12:55:35.059908 tar[1432]: linux-arm64/LICENSE Jan 30 12:55:35.060013 tar[1432]: linux-arm64/README.md Jan 30 12:55:35.071132 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 12:55:35.736440 systemd-networkd[1377]: eth0: Gained IPv6LL Jan 30 12:55:35.743514 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:55:35.747437 systemd[1]: Reached target network-online.target - Network is Online. 
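The long PluginConfig dump above is containerd (v1.7, per the kubelet's later "Container runtime initialized" entry) printing its merged CRI settings: the overlayfs snapshotter, runc through the io.containerd.runc.v2 shim with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI directories /opt/cni/bin and /etc/cni/net.d; the "failed to load cni during init" error is expected until a CNI plugin drops a config into /etc/cni/net.d. As a hedged sketch, the /etc/containerd/config.toml fragment below would produce those values under containerd's version-2 schema; the actual file on this image never appears in the log.

    # Sketch only: containerd CRI settings matching the PluginConfig dump above.
    # Path and schema assume containerd 1.7 conventions; not taken from this host.
    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF
    systemctl restart containerd

SystemdCgroup = true matters here because the kubelet that starts later in this log runs with "CgroupDriver":"systemd"; both sides of the CRI must agree on the cgroup driver.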
Jan 30 12:55:35.758570 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:55:35.762839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:35.765252 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:55:35.798068 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:55:35.800912 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:55:35.801834 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:55:35.809355 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:55:36.145960 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:55:36.168386 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:55:36.179920 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:55:36.194193 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:55:36.194419 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:55:36.218550 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:55:36.229682 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:55:36.246682 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:55:36.249413 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 12:55:36.250587 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:55:36.338397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:36.339771 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 12:55:36.341190 systemd[1]: Startup finished in 614ms (kernel) + 5.065s (initrd) + 3.596s (userspace) = 9.275s. Jan 30 12:55:36.343380 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:36.926027 kubelet[1523]: E0130 12:55:36.925880 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:36.928263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:36.928414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:55:40.497197 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:55:40.498690 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:49884.service - OpenSSH per-connection server daemon (10.0.0.1:49884). Jan 30 12:55:40.598757 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 49884 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:40.601110 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:40.622109 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:55:40.630588 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:55:40.632998 systemd-logind[1418]: New session 1 of user core. Jan 30 12:55:40.646374 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
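The kubelet exit above (status 1; /var/lib/kubelet/config.yaml: no such file or directory) is the normal first-boot state of a kubeadm-style node: kubelet.service is enabled before kubeadm has generated its config file, so it fails and lets systemd retry until provisioning catches up. For orientation, a minimal hand-written KubeletConfiguration is sketched below; this is an assumption for illustration, since the real file is written later by kubeadm and carries many more fields.

    # Sketch only: a minimal stand-in for the missing config file. kubeadm
    # normally generates this; the clusterDNS value is the kubeadm default
    # and is assumed, not taken from this log.
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches containerd's SystemdCgroup=true
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                     # assumed default; not shown in the log
    EOF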
Jan 30 12:55:40.649693 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:55:40.658506 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:55:40.755502 systemd[1540]: Queued start job for default target default.target. Jan 30 12:55:40.766317 systemd[1540]: Created slice app.slice - User Application Slice. Jan 30 12:55:40.766363 systemd[1540]: Reached target paths.target - Paths. Jan 30 12:55:40.766376 systemd[1540]: Reached target timers.target - Timers. Jan 30 12:55:40.768041 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:55:40.779637 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:55:40.781204 systemd[1540]: Reached target sockets.target - Sockets. Jan 30 12:55:40.781445 systemd[1540]: Reached target basic.target - Basic System. Jan 30 12:55:40.781620 systemd[1540]: Reached target default.target - Main User Target. Jan 30 12:55:40.781652 systemd[1540]: Startup finished in 116ms. Jan 30 12:55:40.781715 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:55:40.783606 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:55:40.846543 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:49896.service - OpenSSH per-connection server daemon (10.0.0.1:49896). Jan 30 12:55:40.903287 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 49896 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:40.904682 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:40.909517 systemd-logind[1418]: New session 2 of user core. Jan 30 12:55:40.917434 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 12:55:40.970573 sshd[1551]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:40.982779 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:49896.service: Deactivated successfully. Jan 30 12:55:40.985147 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:55:40.989893 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:55:41.000590 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:49910.service - OpenSSH per-connection server daemon (10.0.0.1:49910). Jan 30 12:55:41.003044 systemd-logind[1418]: Removed session 2. Jan 30 12:55:41.033334 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 49910 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:41.034256 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.040764 systemd-logind[1418]: New session 3 of user core. Jan 30 12:55:41.051498 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:55:41.101504 sshd[1558]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:41.110966 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:49910.service: Deactivated successfully. Jan 30 12:55:41.113710 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:55:41.115109 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:55:41.124568 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:49912.service - OpenSSH per-connection server daemon (10.0.0.1:49912). Jan 30 12:55:41.129400 systemd-logind[1418]: Removed session 3. 
Jan 30 12:55:41.160612 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 49912 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:41.162107 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.166695 systemd-logind[1418]: New session 4 of user core. Jan 30 12:55:41.172435 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:55:41.225660 sshd[1565]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:41.236813 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:49912.service: Deactivated successfully. Jan 30 12:55:41.238323 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 12:55:41.239891 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Jan 30 12:55:41.240967 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:49928.service - OpenSSH per-connection server daemon (10.0.0.1:49928). Jan 30 12:55:41.242186 systemd-logind[1418]: Removed session 4. Jan 30 12:55:41.280728 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 49928 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:41.281268 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.285208 systemd-logind[1418]: New session 5 of user core. Jan 30 12:55:41.295429 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:55:41.365648 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:55:41.368151 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:41.389334 sudo[1575]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:41.392846 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:41.406971 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:49928.service: Deactivated successfully. Jan 30 12:55:41.408785 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:55:41.411618 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Jan 30 12:55:41.419762 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:49940.service - OpenSSH per-connection server daemon (10.0.0.1:49940). Jan 30 12:55:41.423579 systemd-logind[1418]: Removed session 5. Jan 30 12:55:41.455174 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 49940 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:41.456731 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.460923 systemd-logind[1418]: New session 6 of user core. Jan 30 12:55:41.477456 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 12:55:41.532671 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:55:41.533427 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:41.537154 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:41.543553 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 12:55:41.543834 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:41.563564 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 12:55:41.565308 auditctl[1587]: No rules Jan 30 12:55:41.565665 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 30 12:55:41.565849 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 12:55:41.568802 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:55:41.595476 augenrules[1605]: No rules Jan 30 12:55:41.596144 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:55:41.597121 sudo[1583]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:41.599780 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:41.611439 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:49940.service: Deactivated successfully. Jan 30 12:55:41.613469 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:55:41.614759 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Jan 30 12:55:41.616094 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:49948.service - OpenSSH per-connection server daemon (10.0.0.1:49948). Jan 30 12:55:41.617186 systemd-logind[1418]: Removed session 6. Jan 30 12:55:41.656779 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 49948 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:41.658166 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.662311 systemd-logind[1418]: New session 7 of user core. Jan 30 12:55:41.674425 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:55:41.726419 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:55:41.726705 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:42.066533 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 12:55:42.066643 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 12:55:42.353853 dockerd[1635]: time="2025-01-30T12:55:42.353721627Z" level=info msg="Starting up" Jan 30 12:55:42.512905 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1090184800-merged.mount: Deactivated successfully. Jan 30 12:55:42.533105 dockerd[1635]: time="2025-01-30T12:55:42.533043704Z" level=info msg="Loading containers: start." Jan 30 12:55:42.630270 kernel: Initializing XFRM netlink socket Jan 30 12:55:42.703844 systemd-networkd[1377]: docker0: Link UP Jan 30 12:55:42.722911 dockerd[1635]: time="2025-01-30T12:55:42.722841922Z" level=info msg="Loading containers: done." Jan 30 12:55:42.742707 dockerd[1635]: time="2025-01-30T12:55:42.742636975Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 12:55:42.742908 dockerd[1635]: time="2025-01-30T12:55:42.742763675Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 12:55:42.742908 dockerd[1635]: time="2025-01-30T12:55:42.742892392Z" level=info msg="Daemon has completed initialization" Jan 30 12:55:42.779421 dockerd[1635]: time="2025-01-30T12:55:42.779115412Z" level=info msg="API listen on /run/docker.sock" Jan 30 12:55:42.779761 systemd[1]: Started docker.service - Docker Application Container Engine. 
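A few entries back, the core user's sudo session deleted the audit rule fragments and restarted audit-rules.service, after which both auditctl and augenrules reported "No rules". On augenrules-based systems that service concatenates every *.rules fragment under /etc/audit/rules.d/ and loads the result into the kernel, so removing 80-selinux.rules and 99-default.rules leaves nothing to load. The same cycle can be driven by hand, assuming the standard audit userspace tools seen running in the log:

    # Sketch: reproduce the audit-rules reload above by hand.
    sudo augenrules --check   # compare /etc/audit/rules.d/*.rules with the compiled set
    sudo augenrules --load    # rebuild audit.rules from the fragments and load them
    sudo auditctl -l          # list loaded kernel rules; "No rules" here, since the
                              # fragments were deleted just before the restart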
Jan 30 12:55:43.380551 containerd[1438]: time="2025-01-30T12:55:43.380505283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 12:55:44.125628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468675229.mount: Deactivated successfully. Jan 30 12:55:45.089054 containerd[1438]: time="2025-01-30T12:55:45.088987482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:45.089576 containerd[1438]: time="2025-01-30T12:55:45.089529514Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072" Jan 30 12:55:45.090790 containerd[1438]: time="2025-01-30T12:55:45.090743262Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:45.094364 containerd[1438]: time="2025-01-30T12:55:45.094309044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:45.095570 containerd[1438]: time="2025-01-30T12:55:45.095536128Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 1.714977597s" Jan 30 12:55:45.095605 containerd[1438]: time="2025-01-30T12:55:45.095574712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 30 12:55:45.096434 containerd[1438]: time="2025-01-30T12:55:45.096405325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 12:55:46.390449 containerd[1438]: time="2025-01-30T12:55:46.390382295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:46.391509 containerd[1438]: time="2025-01-30T12:55:46.391410266Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469" Jan 30 12:55:46.392388 containerd[1438]: time="2025-01-30T12:55:46.392360359Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:46.396249 containerd[1438]: time="2025-01-30T12:55:46.395982374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:46.397293 containerd[1438]: time="2025-01-30T12:55:46.397253112Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.300812847s" Jan 30 
12:55:46.397367 containerd[1438]: time="2025-01-30T12:55:46.397294514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 30 12:55:46.398126 containerd[1438]: time="2025-01-30T12:55:46.398102046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 12:55:47.178748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 12:55:47.192441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:47.310927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:47.316597 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:47.364753 kubelet[1854]: E0130 12:55:47.364630 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:47.369190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:47.369500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:55:47.703280 containerd[1438]: time="2025-01-30T12:55:47.703011726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.703632 containerd[1438]: time="2025-01-30T12:55:47.703596927Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219" Jan 30 12:55:47.704764 containerd[1438]: time="2025-01-30T12:55:47.704709668Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.708004 containerd[1438]: time="2025-01-30T12:55:47.707953406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.709048 containerd[1438]: time="2025-01-30T12:55:47.709008406Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.310870273s" Jan 30 12:55:47.709048 containerd[1438]: time="2025-01-30T12:55:47.709044037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 30 12:55:47.709934 containerd[1438]: time="2025-01-30T12:55:47.709621921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 12:55:48.833989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129155061.mount: Deactivated successfully. 
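Each PullImage/Pulled pair above is a control-plane image arriving through containerd's CRI, logged with its repo digest, size, and wall-clock pull time. The same operation can be exercised by hand over the socket containerd advertised earlier; this sketch assumes the crictl client is present, which the log does not show.

    # Sketch: pull and list an image via the CRI, mirroring the log entries above.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-scheduler:v1.31.5
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images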
Jan 30 12:55:49.217600 containerd[1438]: time="2025-01-30T12:55:49.217456076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.221502 containerd[1438]: time="2025-01-30T12:55:49.221451246Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119" Jan 30 12:55:49.222767 containerd[1438]: time="2025-01-30T12:55:49.222739657Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.225309 containerd[1438]: time="2025-01-30T12:55:49.225263319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.226118 containerd[1438]: time="2025-01-30T12:55:49.226073319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.516410497s" Jan 30 12:55:49.226118 containerd[1438]: time="2025-01-30T12:55:49.226111229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 30 12:55:49.226688 containerd[1438]: time="2025-01-30T12:55:49.226651043Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 12:55:49.793916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453604000.mount: Deactivated successfully. 
Jan 30 12:55:50.521909 containerd[1438]: time="2025-01-30T12:55:50.521853395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.522989 containerd[1438]: time="2025-01-30T12:55:50.522719871Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 30 12:55:50.523956 containerd[1438]: time="2025-01-30T12:55:50.523915931Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.527410 containerd[1438]: time="2025-01-30T12:55:50.527362509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.528604 containerd[1438]: time="2025-01-30T12:55:50.528561080Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.301872559s" Jan 30 12:55:50.528657 containerd[1438]: time="2025-01-30T12:55:50.528600246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 12:55:50.529127 containerd[1438]: time="2025-01-30T12:55:50.529040357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 12:55:51.062625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369635924.mount: Deactivated successfully. 
Jan 30 12:55:51.068992 containerd[1438]: time="2025-01-30T12:55:51.068938416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:51.070500 containerd[1438]: time="2025-01-30T12:55:51.070460545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 30 12:55:51.072588 containerd[1438]: time="2025-01-30T12:55:51.071532263Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:51.073977 containerd[1438]: time="2025-01-30T12:55:51.073946965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:51.074779 containerd[1438]: time="2025-01-30T12:55:51.074751361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 545.487438ms" Jan 30 12:55:51.074843 containerd[1438]: time="2025-01-30T12:55:51.074785085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 12:55:51.075468 containerd[1438]: time="2025-01-30T12:55:51.075448724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 12:55:51.711020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580645080.mount: Deactivated successfully. Jan 30 12:55:53.128220 containerd[1438]: time="2025-01-30T12:55:53.128166827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:53.129198 containerd[1438]: time="2025-01-30T12:55:53.128745027Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 30 12:55:53.130949 containerd[1438]: time="2025-01-30T12:55:53.130305604Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:53.135251 containerd[1438]: time="2025-01-30T12:55:53.133858066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:53.136364 containerd[1438]: time="2025-01-30T12:55:53.136322467Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.060739395s" Jan 30 12:55:53.136476 containerd[1438]: time="2025-01-30T12:55:53.136457472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 30 12:55:57.619695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
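The restart counter here, together with the earlier failures, shows systemd respawning the still-unconfigured kubelet roughly every ten seconds (failure at 12:55:36.9, restart at 12:55:47.2, restart at 12:55:57.6), consistent with a Restart=always, RestartSec=10 policy. That is an inference from the timestamps; the unit file itself never appears in the log, but it can be read off a running host:

    # Sketch: inspect the restart policy behind the ~10 s respawn loop above.
    systemctl cat kubelet.service                            # unit file plus drop-ins
    systemctl show -p Restart -p RestartUSec kubelet.service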
Jan 30 12:55:57.629487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:57.761639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:57.765987 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:57.811028 kubelet[2003]: E0130 12:55:57.810967 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:57.813784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:57.813918 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:55:59.133543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:59.144609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:59.166615 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit session-7.scope)... Jan 30 12:55:59.166764 systemd[1]: Reloading... Jan 30 12:55:59.229261 zram_generator::config[2054]: No configuration found. Jan 30 12:55:59.444007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:55:59.498133 systemd[1]: Reloading finished in 330 ms. Jan 30 12:55:59.539210 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 12:55:59.539287 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 12:55:59.539519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:59.541980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:59.640048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:59.644660 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:55:59.681106 kubelet[2104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:55:59.681106 kubelet[2104]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:55:59.681106 kubelet[2104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
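This attempt is different: the reload requested from session-7 (where /home/core/install.sh ran under sudo) evidently put the kubelet's configuration in place, and the kubelet now starts far enough to warn that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir are deprecated flags that belong in the config file. In kubeadm-style deployments such flags typically arrive via a systemd drop-in populating KUBELET_KUBEADM_ARGS, the variable named in the earlier "unset environment variable" warnings. The drop-in below is a sketch of that convention only; the real file on this host is not shown, and its path and contents are assumed.

    # Sketch only: a kubeadm-convention drop-in supplying the flags warned about
    # above. File path and exact contents are assumed, not taken from this host.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBEADM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.8 --volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
    EOF
    systemctl daemon-reload && systemctl restart kubelet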
Jan 30 12:55:59.681583 kubelet[2104]: I0130 12:55:59.681533 2104 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:56:00.261684 kubelet[2104]: I0130 12:56:00.261635 2104 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 12:56:00.261684 kubelet[2104]: I0130 12:56:00.261670 2104 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:56:00.261962 kubelet[2104]: I0130 12:56:00.261935 2104 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 12:56:00.306104 kubelet[2104]: E0130 12:56:00.306063 2104 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:00.307042 kubelet[2104]: I0130 12:56:00.307014 2104 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:56:00.315668 kubelet[2104]: E0130 12:56:00.315616 2104 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 12:56:00.315668 kubelet[2104]: I0130 12:56:00.315650 2104 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 12:56:00.319215 kubelet[2104]: I0130 12:56:00.319177 2104 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:56:00.320097 kubelet[2104]: I0130 12:56:00.320063 2104 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 12:56:00.320278 kubelet[2104]: I0130 12:56:00.320241 2104 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:56:00.320455 kubelet[2104]: I0130 12:56:00.320275 2104 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 12:56:00.320653 kubelet[2104]: I0130 12:56:00.320634 2104 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:56:00.320653 kubelet[2104]: I0130 12:56:00.320647 2104 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 12:56:00.320858 kubelet[2104]: I0130 12:56:00.320837 2104 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:00.324857 kubelet[2104]: I0130 12:56:00.324834 2104 kubelet.go:408] "Attempting to sync node with API server" Jan 30 12:56:00.324912 kubelet[2104]: I0130 12:56:00.324862 2104 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:56:00.325926 kubelet[2104]: I0130 12:56:00.325388 2104 kubelet.go:314] "Adding apiserver pod source" Jan 30 12:56:00.325926 kubelet[2104]: I0130 12:56:00.325408 2104 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:56:00.330869 kubelet[2104]: W0130 12:56:00.330798 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 12:56:00.330971 kubelet[2104]: E0130 12:56:00.330869 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:00.332865 kubelet[2104]: I0130 12:56:00.332841 2104 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:56:00.333069 kubelet[2104]: W0130 12:56:00.333031 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 12:56:00.333109 kubelet[2104]: E0130 12:56:00.333085 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:00.334935 kubelet[2104]: I0130 12:56:00.334827 2104 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:56:00.337398 kubelet[2104]: W0130 12:56:00.337373 2104 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 12:56:00.338328 kubelet[2104]: I0130 12:56:00.338248 2104 server.go:1269] "Started kubelet" Jan 30 12:56:00.338911 kubelet[2104]: I0130 12:56:00.338862 2104 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:56:00.339724 kubelet[2104]: I0130 12:56:00.339211 2104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:56:00.339724 kubelet[2104]: I0130 12:56:00.339606 2104 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:56:00.340306 kubelet[2104]: I0130 12:56:00.340245 2104 server.go:460] "Adding debug handlers to kubelet server" Jan 30 12:56:00.341631 kubelet[2104]: I0130 12:56:00.341590 2104 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:56:00.342456 kubelet[2104]: I0130 12:56:00.342106 2104 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 12:56:00.343573 kubelet[2104]: I0130 12:56:00.343547 2104 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 12:56:00.343691 kubelet[2104]: I0130 12:56:00.343672 2104 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 12:56:00.343759 kubelet[2104]: I0130 12:56:00.343746 2104 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:56:00.343997 kubelet[2104]: E0130 12:56:00.343969 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:00.344067 kubelet[2104]: W0130 12:56:00.344026 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 12:56:00.344104 kubelet[2104]: E0130 12:56:00.344083 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:00.344104 kubelet[2104]: E0130 12:56:00.344052 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jan 30 12:56:00.344488 kubelet[2104]: I0130 12:56:00.344213 2104 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:56:00.344488 kubelet[2104]: I0130 12:56:00.344313 2104 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:56:00.344488 kubelet[2104]: E0130 12:56:00.344415 2104 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:56:00.344986 kubelet[2104]: E0130 12:56:00.343054 2104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f79abe2b574ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:56:00.338212011 +0000 UTC m=+0.690312627,LastTimestamp:2025-01-30 12:56:00.338212011 +0000 UTC m=+0.690312627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:56:00.346157 kubelet[2104]: I0130 12:56:00.345890 2104 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:56:00.359278 kubelet[2104]: I0130 12:56:00.359246 2104 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:56:00.359278 kubelet[2104]: I0130 12:56:00.359264 2104 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:56:00.359278 kubelet[2104]: I0130 12:56:00.359282 2104 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:00.363352 kubelet[2104]: I0130 12:56:00.363220 2104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:56:00.364494 kubelet[2104]: I0130 12:56:00.364331 2104 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 12:56:00.364494 kubelet[2104]: I0130 12:56:00.364453 2104 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:56:00.364494 kubelet[2104]: I0130 12:56:00.364473 2104 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 12:56:00.365263 kubelet[2104]: E0130 12:56:00.364693 2104 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:56:00.365263 kubelet[2104]: W0130 12:56:00.364994 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 12:56:00.365263 kubelet[2104]: E0130 12:56:00.365033 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:00.444510 kubelet[2104]: E0130 12:56:00.444472 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:00.465764 kubelet[2104]: E0130 12:56:00.465726 2104 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 12:56:00.471149 kubelet[2104]: I0130 12:56:00.471120 2104 policy_none.go:49] "None policy: Start" Jan 30 12:56:00.472563 kubelet[2104]: I0130 12:56:00.472033 2104 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:56:00.472563 kubelet[2104]: I0130 12:56:00.472105 2104 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:56:00.478988 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 12:56:00.493747 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 12:56:00.496624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
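Buried in the Container Manager dump above are the kubelet's hard-eviction thresholds (the HardEvictionThresholds array). Transcribed into the equivalent KubeletConfiguration stanza they read as below; the YAML is a sketch built from the dumped values, not a file taken from this node.

    # Sketch: the HardEvictionThresholds from the nodeConfig dump, rewritten in
    # KubeletConfiguration form. Values transcribed from the log; file hypothetical.
    cat <<'EOF'
    evictionHard:
      memory.available:   "100Mi"
      nodefs.available:   "10%"
      nodefs.inodesFree:  "5%"
      imagefs.available:  "15%"
      imagefs.inodesFree: "5%"
    EOF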
Jan 30 12:56:00.508924 kubelet[2104]: I0130 12:56:00.508104 2104 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:56:00.508924 kubelet[2104]: I0130 12:56:00.508391 2104 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 12:56:00.508924 kubelet[2104]: I0130 12:56:00.508406 2104 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:56:00.508924 kubelet[2104]: I0130 12:56:00.508677 2104 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:56:00.510206 kubelet[2104]: E0130 12:56:00.510128 2104 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 12:56:00.544629 kubelet[2104]: E0130 12:56:00.544501 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jan 30 12:56:00.609974 kubelet[2104]: I0130 12:56:00.609926 2104 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 12:56:00.610417 kubelet[2104]: E0130 12:56:00.610374 2104 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 12:56:00.675499 systemd[1]: Created slice kubepods-burstable-pod8b776b6fa45a93d98c0325f358ac85aa.slice - libcontainer container kubepods-burstable-pod8b776b6fa45a93d98c0325f358ac85aa.slice. Jan 30 12:56:00.685898 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 30 12:56:00.702442 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
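Every "dial tcp 10.0.0.64:6443: connect: connection refused" in this stretch is the expected chicken-and-egg of a self-hosting control plane: the kubelet's informers and lease controller keep retrying an API server that does not exist yet (note the retry interval backing off through 200ms, 400ms, and 800ms), while the kubelet itself creates the kubepods slices and prepares the static pods from /etc/kubernetes/manifests that will eventually serve that API. Progress can be watched from the host; the sketch below assumes the kubeadm layout seen in the log and the crictl client.

    # Sketch: watch the static control-plane pods come up while port 6443 still
    # refuses connections.
    ls /etc/kubernetes/manifests/            # kubeadm's static pod manifests
    sudo crictl pods                         # sandboxes from the RunPodSandbox calls below
    curl -ks https://10.0.0.64:6443/healthz  # returns "ok" once kube-apiserver serves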
Jan 30 12:56:00.745386 kubelet[2104]: I0130 12:56:00.745325 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:00.745386 kubelet[2104]: I0130 12:56:00.745378 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:00.745801 kubelet[2104]: I0130 12:56:00.745401 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b776b6fa45a93d98c0325f358ac85aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b776b6fa45a93d98c0325f358ac85aa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:00.745801 kubelet[2104]: I0130 12:56:00.745418 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b776b6fa45a93d98c0325f358ac85aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b776b6fa45a93d98c0325f358ac85aa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:00.745801 kubelet[2104]: I0130 12:56:00.745435 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:00.745801 kubelet[2104]: I0130 12:56:00.745450 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:56:00.745801 kubelet[2104]: I0130 12:56:00.745464 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b776b6fa45a93d98c0325f358ac85aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b776b6fa45a93d98c0325f358ac85aa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:00.745922 kubelet[2104]: I0130 12:56:00.745478 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:00.745922 kubelet[2104]: I0130 12:56:00.745494 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:00.811598 kubelet[2104]: I0130 12:56:00.811492 2104 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 12:56:00.811903 kubelet[2104]: E0130 12:56:00.811850 2104 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 12:56:00.945094 kubelet[2104]: E0130 12:56:00.945027 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jan 30 12:56:00.984464 kubelet[2104]: E0130 12:56:00.984418 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:00.985123 containerd[1438]: time="2025-01-30T12:56:00.985084374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b776b6fa45a93d98c0325f358ac85aa,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:01.000414 kubelet[2104]: E0130 12:56:01.000368 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:01.001137 containerd[1438]: time="2025-01-30T12:56:01.000848007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:01.004677 kubelet[2104]: E0130 12:56:01.004644 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:01.005251 containerd[1438]: time="2025-01-30T12:56:01.005203349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:01.144741 kubelet[2104]: W0130 12:56:01.144591 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 12:56:01.144741 kubelet[2104]: E0130 12:56:01.144668 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:01.213596 kubelet[2104]: I0130 12:56:01.213498 2104 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 12:56:01.213812 kubelet[2104]: E0130 12:56:01.213790 2104 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 12:56:01.232136 kubelet[2104]: E0130 12:56:01.232021 2104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection 
refused" event="&Event{ObjectMeta:{localhost.181f79abe2b574ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:56:00.338212011 +0000 UTC m=+0.690312627,LastTimestamp:2025-01-30 12:56:00.338212011 +0000 UTC m=+0.690312627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:56:01.437522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988953160.mount: Deactivated successfully. Jan 30 12:56:01.445765 containerd[1438]: time="2025-01-30T12:56:01.445712690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:01.446282 containerd[1438]: time="2025-01-30T12:56:01.446190499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 12:56:01.447061 containerd[1438]: time="2025-01-30T12:56:01.447025866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:01.448212 containerd[1438]: time="2025-01-30T12:56:01.448170434Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:01.448522 containerd[1438]: time="2025-01-30T12:56:01.448375769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:56:01.449403 containerd[1438]: time="2025-01-30T12:56:01.449348571Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:01.450100 containerd[1438]: time="2025-01-30T12:56:01.450020965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:56:01.451996 containerd[1438]: time="2025-01-30T12:56:01.451920252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:01.455433 containerd[1438]: time="2025-01-30T12:56:01.455384449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 470.216679ms" Jan 30 12:56:01.457071 containerd[1438]: time="2025-01-30T12:56:01.456809084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
455.873686ms" Jan 30 12:56:01.458819 containerd[1438]: time="2025-01-30T12:56:01.458779907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 453.489556ms" Jan 30 12:56:01.526420 kubelet[2104]: W0130 12:56:01.526343 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 12:56:01.526640 kubelet[2104]: E0130 12:56:01.526603 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:01.619315 containerd[1438]: time="2025-01-30T12:56:01.619144702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:01.621154 containerd[1438]: time="2025-01-30T12:56:01.619221273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:01.621154 containerd[1438]: time="2025-01-30T12:56:01.621128113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:01.621449 containerd[1438]: time="2025-01-30T12:56:01.621241451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:01.621538 containerd[1438]: time="2025-01-30T12:56:01.621034677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:01.622059 containerd[1438]: time="2025-01-30T12:56:01.622013635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:01.622059 containerd[1438]: time="2025-01-30T12:56:01.622045286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:01.622211 containerd[1438]: time="2025-01-30T12:56:01.622171692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:01.622300 containerd[1438]: time="2025-01-30T12:56:01.622165098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:01.623061 containerd[1438]: time="2025-01-30T12:56:01.623000665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:01.623138 containerd[1438]: time="2025-01-30T12:56:01.623058692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
Jan 30 12:56:01.623213 containerd[1438]: time="2025-01-30T12:56:01.623175667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:56:01.642458 systemd[1]: Started cri-containerd-1258f65d1837dc8b49f515834b7c6d67ba295a47a40f304cb1a84acd6cab69d1.scope - libcontainer container 1258f65d1837dc8b49f515834b7c6d67ba295a47a40f304cb1a84acd6cab69d1.
Jan 30 12:56:01.643670 systemd[1]: Started cri-containerd-505f5c56639bf95de85d52da6a06c4bc06f66f53bf8ded3d3a07eac6b65d47fb.scope - libcontainer container 505f5c56639bf95de85d52da6a06c4bc06f66f53bf8ded3d3a07eac6b65d47fb.
Jan 30 12:56:01.649084 systemd[1]: Started cri-containerd-0f76ed5bf21ac330bb5a9d41d4155c32edc35025aeafafebae71fa63e2149f12.scope - libcontainer container 0f76ed5bf21ac330bb5a9d41d4155c32edc35025aeafafebae71fa63e2149f12.
Jan 30 12:56:01.683530 containerd[1438]: time="2025-01-30T12:56:01.683471536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1258f65d1837dc8b49f515834b7c6d67ba295a47a40f304cb1a84acd6cab69d1\""
Jan 30 12:56:01.685019 kubelet[2104]: E0130 12:56:01.684984 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:01.693087 containerd[1438]: time="2025-01-30T12:56:01.692969412Z" level=info msg="CreateContainer within sandbox \"1258f65d1837dc8b49f515834b7c6d67ba295a47a40f304cb1a84acd6cab69d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 12:56:01.693451 kubelet[2104]: W0130 12:56:01.693218 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
Jan 30 12:56:01.693451 kubelet[2104]: E0130 12:56:01.693312 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
Jan 30 12:56:01.697974 containerd[1438]: time="2025-01-30T12:56:01.697342149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b776b6fa45a93d98c0325f358ac85aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"505f5c56639bf95de85d52da6a06c4bc06f66f53bf8ded3d3a07eac6b65d47fb\""
Jan 30 12:56:01.698892 kubelet[2104]: E0130 12:56:01.698853 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:01.700613 containerd[1438]: time="2025-01-30T12:56:01.700565003Z" level=info msg="CreateContainer within sandbox \"505f5c56639bf95de85d52da6a06c4bc06f66f53bf8ded3d3a07eac6b65d47fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 12:56:01.706295 containerd[1438]: time="2025-01-30T12:56:01.705991270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f76ed5bf21ac330bb5a9d41d4155c32edc35025aeafafebae71fa63e2149f12\""
\"0f76ed5bf21ac330bb5a9d41d4155c32edc35025aeafafebae71fa63e2149f12\"" Jan 30 12:56:01.707171 kubelet[2104]: E0130 12:56:01.706765 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:01.708427 containerd[1438]: time="2025-01-30T12:56:01.708389228Z" level=info msg="CreateContainer within sandbox \"0f76ed5bf21ac330bb5a9d41d4155c32edc35025aeafafebae71fa63e2149f12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 12:56:01.729150 containerd[1438]: time="2025-01-30T12:56:01.729088003Z" level=info msg="CreateContainer within sandbox \"1258f65d1837dc8b49f515834b7c6d67ba295a47a40f304cb1a84acd6cab69d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cea55acf835f05c1211c4d58fc81a7df39f42b305ef923bfe783473252b9d28\"" Jan 30 12:56:01.730307 containerd[1438]: time="2025-01-30T12:56:01.730246798Z" level=info msg="StartContainer for \"4cea55acf835f05c1211c4d58fc81a7df39f42b305ef923bfe783473252b9d28\"" Jan 30 12:56:01.734877 containerd[1438]: time="2025-01-30T12:56:01.734813800Z" level=info msg="CreateContainer within sandbox \"0f76ed5bf21ac330bb5a9d41d4155c32edc35025aeafafebae71fa63e2149f12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2332a9b06b33e0cfddaa906f0efde648417a6ce5ef3174e3ceeab9e807ae4682\"" Jan 30 12:56:01.736370 containerd[1438]: time="2025-01-30T12:56:01.735445310Z" level=info msg="StartContainer for \"2332a9b06b33e0cfddaa906f0efde648417a6ce5ef3174e3ceeab9e807ae4682\"" Jan 30 12:56:01.737547 containerd[1438]: time="2025-01-30T12:56:01.737500737Z" level=info msg="CreateContainer within sandbox \"505f5c56639bf95de85d52da6a06c4bc06f66f53bf8ded3d3a07eac6b65d47fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b9d66fd8122f947a3fb5346937b1139c82cbc7463550dfcc6d089101d0806e8\"" Jan 30 12:56:01.738481 containerd[1438]: time="2025-01-30T12:56:01.738316321Z" level=info msg="StartContainer for \"3b9d66fd8122f947a3fb5346937b1139c82cbc7463550dfcc6d089101d0806e8\"" Jan 30 12:56:01.746331 kubelet[2104]: E0130 12:56:01.746282 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jan 30 12:56:01.760478 systemd[1]: Started cri-containerd-4cea55acf835f05c1211c4d58fc81a7df39f42b305ef923bfe783473252b9d28.scope - libcontainer container 4cea55acf835f05c1211c4d58fc81a7df39f42b305ef923bfe783473252b9d28. Jan 30 12:56:01.773457 systemd[1]: Started cri-containerd-2332a9b06b33e0cfddaa906f0efde648417a6ce5ef3174e3ceeab9e807ae4682.scope - libcontainer container 2332a9b06b33e0cfddaa906f0efde648417a6ce5ef3174e3ceeab9e807ae4682. Jan 30 12:56:01.774942 systemd[1]: Started cri-containerd-3b9d66fd8122f947a3fb5346937b1139c82cbc7463550dfcc6d089101d0806e8.scope - libcontainer container 3b9d66fd8122f947a3fb5346937b1139c82cbc7463550dfcc6d089101d0806e8. 
Jan 30 12:56:01.800719 kubelet[2104]: W0130 12:56:01.800523 2104 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
Jan 30 12:56:01.800719 kubelet[2104]: E0130 12:56:01.800606 2104 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
Jan 30 12:56:01.897432 containerd[1438]: time="2025-01-30T12:56:01.897386923Z" level=info msg="StartContainer for \"2332a9b06b33e0cfddaa906f0efde648417a6ce5ef3174e3ceeab9e807ae4682\" returns successfully"
Jan 30 12:56:01.897800 containerd[1438]: time="2025-01-30T12:56:01.897633341Z" level=info msg="StartContainer for \"4cea55acf835f05c1211c4d58fc81a7df39f42b305ef923bfe783473252b9d28\" returns successfully"
Jan 30 12:56:01.897800 containerd[1438]: time="2025-01-30T12:56:01.897637497Z" level=info msg="StartContainer for \"3b9d66fd8122f947a3fb5346937b1139c82cbc7463550dfcc6d089101d0806e8\" returns successfully"
Jan 30 12:56:02.015905 kubelet[2104]: I0130 12:56:02.015453 2104 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 12:56:02.015905 kubelet[2104]: E0130 12:56:02.015788 2104 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
Jan 30 12:56:02.373794 kubelet[2104]: E0130 12:56:02.373606 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:02.376352 kubelet[2104]: E0130 12:56:02.376023 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:02.379688 kubelet[2104]: E0130 12:56:02.379583 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:03.382895 kubelet[2104]: E0130 12:56:03.382808 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:03.484491 kubelet[2104]: E0130 12:56:03.484425 2104 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 30 12:56:03.617104 kubelet[2104]: I0130 12:56:03.617075 2104 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 12:56:03.632346 kubelet[2104]: I0130 12:56:03.632300 2104 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 30 12:56:03.632346 kubelet[2104]: E0130 12:56:03.632344 2104 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 30 12:56:03.645189 kubelet[2104]: E0130 12:56:03.645001 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:03.745524 kubelet[2104]: E0130 12:56:03.745453 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:03.845798 kubelet[2104]: E0130 12:56:03.845759 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:03.946415 kubelet[2104]: E0130 12:56:03.946273 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:04.046821 kubelet[2104]: E0130 12:56:04.046774 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:04.147457 kubelet[2104]: E0130 12:56:04.147367 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:04.248251 kubelet[2104]: E0130 12:56:04.248118 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:04.348699 kubelet[2104]: E0130 12:56:04.348641 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:04.384166 kubelet[2104]: E0130 12:56:04.384073 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:04.449572 kubelet[2104]: E0130 12:56:04.449531 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:05.330744 kubelet[2104]: I0130 12:56:05.330693 2104 apiserver.go:52] "Watching apiserver"
Jan 30 12:56:05.344626 kubelet[2104]: I0130 12:56:05.344589 2104 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 12:56:05.621903 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)...
Jan 30 12:56:05.621923 systemd[1]: Reloading...
Jan 30 12:56:05.711266 zram_generator::config[2427]: No configuration found.
Jan 30 12:56:05.801216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:56:05.868368 systemd[1]: Reloading finished in 246 ms.
Jan 30 12:56:05.902899 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:56:05.918424 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 12:56:05.918659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:56:05.918746 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 117.3M memory peak, 0B memory swap peak.
Jan 30 12:56:05.932558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:56:06.037871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:56:06.042836 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 12:56:06.082355 kubelet[2466]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 12:56:06.082355 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 12:56:06.082355 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 12:56:06.082791 kubelet[2466]: I0130 12:56:06.082406 2466 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 12:56:06.089258 kubelet[2466]: I0130 12:56:06.089185 2466 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 12:56:06.089258 kubelet[2466]: I0130 12:56:06.089220 2466 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 12:56:06.089642 kubelet[2466]: I0130 12:56:06.089610 2466 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 12:56:06.092319 kubelet[2466]: I0130 12:56:06.091970 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 12:56:06.095011 kubelet[2466]: I0130 12:56:06.094965 2466 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 12:56:06.100940 kubelet[2466]: E0130 12:56:06.100900 2466 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 12:56:06.100940 kubelet[2466]: I0130 12:56:06.100936 2466 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 12:56:06.103435 kubelet[2466]: I0130 12:56:06.103414 2466 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
defaulting to /" Jan 30 12:56:06.103538 kubelet[2466]: I0130 12:56:06.103527 2466 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 12:56:06.103650 kubelet[2466]: I0130 12:56:06.103623 2466 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:56:06.103841 kubelet[2466]: I0130 12:56:06.103651 2466 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 12:56:06.103918 kubelet[2466]: I0130 12:56:06.103853 2466 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:56:06.103918 kubelet[2466]: I0130 12:56:06.103863 2466 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 12:56:06.103918 kubelet[2466]: I0130 12:56:06.103899 2466 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:06.104021 kubelet[2466]: I0130 12:56:06.104010 2466 kubelet.go:408] "Attempting to sync node with API server" Jan 30 12:56:06.104047 kubelet[2466]: I0130 12:56:06.104026 2466 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:56:06.104066 kubelet[2466]: I0130 12:56:06.104048 2466 kubelet.go:314] "Adding apiserver pod source" Jan 30 12:56:06.104066 kubelet[2466]: I0130 12:56:06.104060 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:56:06.105318 kubelet[2466]: I0130 12:56:06.105289 2466 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:56:06.105942 kubelet[2466]: I0130 12:56:06.105918 2466 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:56:06.107293 kubelet[2466]: I0130 12:56:06.106527 2466 server.go:1269] "Started kubelet" Jan 30 12:56:06.107459 kubelet[2466]: I0130 12:56:06.107400 2466 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 
Jan 30 12:56:06.107729 kubelet[2466]: I0130 12:56:06.107704 2466 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 12:56:06.107813 kubelet[2466]: I0130 12:56:06.107789 2466 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 12:56:06.108877 kubelet[2466]: I0130 12:56:06.108852 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 12:56:06.109163 kubelet[2466]: I0130 12:56:06.109130 2466 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 12:56:06.110617 kubelet[2466]: I0130 12:56:06.110575 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 12:56:06.111455 kubelet[2466]: I0130 12:56:06.111418 2466 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 12:56:06.114477 kubelet[2466]: I0130 12:56:06.111586 2466 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 12:56:06.114477 kubelet[2466]: I0130 12:56:06.111734 2466 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 12:56:06.114477 kubelet[2466]: E0130 12:56:06.111931 2466 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:56:06.116277 kubelet[2466]: E0130 12:56:06.114864 2466 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 12:56:06.125674 kubelet[2466]: I0130 12:56:06.125141 2466 factory.go:221] Registration of the containerd container factory successfully
Jan 30 12:56:06.125674 kubelet[2466]: I0130 12:56:06.125169 2466 factory.go:221] Registration of the systemd container factory successfully
Jan 30 12:56:06.125674 kubelet[2466]: I0130 12:56:06.125336 2466 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 12:56:06.138122 kubelet[2466]: I0130 12:56:06.138063 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 12:56:06.140531 kubelet[2466]: I0130 12:56:06.140482 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
protocol="IPv6" Jan 30 12:56:06.141000 kubelet[2466]: I0130 12:56:06.140985 2466 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:56:06.141123 kubelet[2466]: I0130 12:56:06.141112 2466 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 12:56:06.141299 kubelet[2466]: E0130 12:56:06.141275 2466 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:56:06.167612 kubelet[2466]: I0130 12:56:06.167501 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:56:06.167612 kubelet[2466]: I0130 12:56:06.167522 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:56:06.167612 kubelet[2466]: I0130 12:56:06.167541 2466 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:06.167768 kubelet[2466]: I0130 12:56:06.167693 2466 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 12:56:06.167768 kubelet[2466]: I0130 12:56:06.167706 2466 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 12:56:06.167768 kubelet[2466]: I0130 12:56:06.167724 2466 policy_none.go:49] "None policy: Start" Jan 30 12:56:06.168809 kubelet[2466]: I0130 12:56:06.168753 2466 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:56:06.168809 kubelet[2466]: I0130 12:56:06.168782 2466 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:56:06.169157 kubelet[2466]: I0130 12:56:06.168937 2466 state_mem.go:75] "Updated machine memory state" Jan 30 12:56:06.176011 kubelet[2466]: I0130 12:56:06.175893 2466 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:56:06.176136 kubelet[2466]: I0130 12:56:06.176076 2466 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 12:56:06.176136 kubelet[2466]: I0130 12:56:06.176088 2466 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:56:06.176326 kubelet[2466]: I0130 12:56:06.176308 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:56:06.280641 kubelet[2466]: I0130 12:56:06.280597 2466 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 12:56:06.288733 kubelet[2466]: I0130 12:56:06.288689 2466 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 30 12:56:06.288868 kubelet[2466]: I0130 12:56:06.288783 2466 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 12:56:06.312115 kubelet[2466]: I0130 12:56:06.312055 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:06.312115 kubelet[2466]: I0130 12:56:06.312100 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:06.312312 kubelet[2466]: I0130 12:56:06.312182 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:06.312312 kubelet[2466]: I0130 12:56:06.312202 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:06.312312 kubelet[2466]: I0130 12:56:06.312279 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:06.312312 kubelet[2466]: I0130 12:56:06.312298 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b776b6fa45a93d98c0325f358ac85aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b776b6fa45a93d98c0325f358ac85aa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:06.312411 kubelet[2466]: I0130 12:56:06.312317 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b776b6fa45a93d98c0325f358ac85aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b776b6fa45a93d98c0325f358ac85aa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:06.312411 kubelet[2466]: I0130 12:56:06.312333 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b776b6fa45a93d98c0325f358ac85aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b776b6fa45a93d98c0325f358ac85aa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:06.312411 kubelet[2466]: I0130 12:56:06.312352 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:56:06.557106 kubelet[2466]: E0130 12:56:06.556981 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:06.557106 kubelet[2466]: E0130 12:56:06.557066 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:06.557267 kubelet[2466]: E0130 12:56:06.556985 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:06.625306 sudo[2503]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 12:56:06.625606 sudo[2503]: pam_unix(sudo:session): session 
Jan 30 12:56:07.051788 sudo[2503]: pam_unix(sudo:session): session closed for user root
Jan 30 12:56:07.104935 kubelet[2466]: I0130 12:56:07.104896 2466 apiserver.go:52] "Watching apiserver"
Jan 30 12:56:07.112133 kubelet[2466]: I0130 12:56:07.112099 2466 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 12:56:07.137182 kubelet[2466]: I0130 12:56:07.137114 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1370992580000001 podStartE2EDuration="1.137099258s" podCreationTimestamp="2025-01-30 12:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:07.13569524 +0000 UTC m=+1.089651995" watchObservedRunningTime="2025-01-30 12:56:07.137099258 +0000 UTC m=+1.091056013"
Jan 30 12:56:07.155452 kubelet[2466]: E0130 12:56:07.155305 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:07.156364 kubelet[2466]: E0130 12:56:07.155482 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:07.156364 kubelet[2466]: E0130 12:56:07.155201 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:07.160979 kubelet[2466]: I0130 12:56:07.160908 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.160889357 podStartE2EDuration="1.160889357s" podCreationTimestamp="2025-01-30 12:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:07.147163432 +0000 UTC m=+1.101120187" watchObservedRunningTime="2025-01-30 12:56:07.160889357 +0000 UTC m=+1.114846112"
Jan 30 12:56:07.161717 kubelet[2466]: I0130 12:56:07.161340 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.161303934 podStartE2EDuration="1.161303934s" podCreationTimestamp="2025-01-30 12:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:07.160327214 +0000 UTC m=+1.114283969" watchObservedRunningTime="2025-01-30 12:56:07.161303934 +0000 UTC m=+1.115260689"
Jan 30 12:56:08.156457 kubelet[2466]: E0130 12:56:08.156418 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:08.402842 sudo[1616]: pam_unix(sudo:session): session closed for user root
Jan 30 12:56:08.404543 sshd[1613]: pam_unix(sshd:session): session closed for user core
Jan 30 12:56:08.408653 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:49948.service: Deactivated successfully.
Jan 30 12:56:08.410513 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 12:56:08.410800 systemd[1]: session-7.scope: Consumed 7.941s CPU time, 154.3M memory peak, 0B memory swap peak.
Jan 30 12:56:08.413557 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit.
Jan 30 12:56:08.415220 systemd-logind[1418]: Removed session 7.
Jan 30 12:56:08.455692 kubelet[2466]: E0130 12:56:08.455653 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:08.914201 kubelet[2466]: E0130 12:56:08.914097 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:09.780350 kubelet[2466]: E0130 12:56:09.780316 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:11.742541 kubelet[2466]: I0130 12:56:11.742497 2466 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 12:56:11.743415 containerd[1438]: time="2025-01-30T12:56:11.743287882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 12:56:11.743764 kubelet[2466]: I0130 12:56:11.743492 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 12:56:12.562236 systemd[1]: Created slice kubepods-besteffort-pod3b1ac557_a0b7_4ce8_bdff_168e25073a34.slice - libcontainer container kubepods-besteffort-pod3b1ac557_a0b7_4ce8_bdff_168e25073a34.slice.
Jan 30 12:56:12.577176 systemd[1]: Created slice kubepods-burstable-pod364bfab2_93a9_4445_9fae_81330b062b22.slice - libcontainer container kubepods-burstable-pod364bfab2_93a9_4445_9fae_81330b062b22.slice.
Jan 30 12:56:12.747699 kubelet[2466]: I0130 12:56:12.747590 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-cgroup\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.747699 kubelet[2466]: I0130 12:56:12.747642 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b1ac557-a0b7-4ce8-bdff-168e25073a34-kube-proxy\") pod \"kube-proxy-9vqlh\" (UID: \"3b1ac557-a0b7-4ce8-bdff-168e25073a34\") " pod="kube-system/kube-proxy-9vqlh"
Jan 30 12:56:12.747699 kubelet[2466]: I0130 12:56:12.747662 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbf7m\" (UniqueName: \"kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-kube-api-access-tbf7m\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.747699 kubelet[2466]: I0130 12:56:12.747679 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-etc-cni-netd\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.748171 kubelet[2466]: I0130 12:56:12.747739 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/364bfab2-93a9-4445-9fae-81330b062b22-clustermesh-secrets\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.748171 kubelet[2466]: I0130 12:56:12.747803 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-run\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.748171 kubelet[2466]: I0130 12:56:12.747883 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-kernel\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.748171 kubelet[2466]: I0130 12:56:12.747900 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-net\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.748171 kubelet[2466]: I0130 12:56:12.747917 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-hubble-tls\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69"
Jan 30 12:56:12.748327 kubelet[2466]: I0130 12:56:12.747946 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n849v\" (UniqueName: \"kubernetes.io/projected/3b1ac557-a0b7-4ce8-bdff-168e25073a34-kube-api-access-n849v\") pod \"kube-proxy-9vqlh\" (UID: \"3b1ac557-a0b7-4ce8-bdff-168e25073a34\") " pod="kube-system/kube-proxy-9vqlh"
\"kube-api-access-n849v\" (UniqueName: \"kubernetes.io/projected/3b1ac557-a0b7-4ce8-bdff-168e25073a34-kube-api-access-n849v\") pod \"kube-proxy-9vqlh\" (UID: \"3b1ac557-a0b7-4ce8-bdff-168e25073a34\") " pod="kube-system/kube-proxy-9vqlh" Jan 30 12:56:12.748327 kubelet[2466]: I0130 12:56:12.747965 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-bpf-maps\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69" Jan 30 12:56:12.748327 kubelet[2466]: I0130 12:56:12.747986 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cni-path\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69" Jan 30 12:56:12.748327 kubelet[2466]: I0130 12:56:12.748004 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-lib-modules\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69" Jan 30 12:56:12.748327 kubelet[2466]: I0130 12:56:12.748022 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/364bfab2-93a9-4445-9fae-81330b062b22-cilium-config-path\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69" Jan 30 12:56:12.748327 kubelet[2466]: I0130 12:56:12.748037 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-hostproc\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69" Jan 30 12:56:12.748523 kubelet[2466]: I0130 12:56:12.748051 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-xtables-lock\") pod \"cilium-5lb69\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") " pod="kube-system/cilium-5lb69" Jan 30 12:56:12.748523 kubelet[2466]: I0130 12:56:12.748066 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b1ac557-a0b7-4ce8-bdff-168e25073a34-xtables-lock\") pod \"kube-proxy-9vqlh\" (UID: \"3b1ac557-a0b7-4ce8-bdff-168e25073a34\") " pod="kube-system/kube-proxy-9vqlh" Jan 30 12:56:12.748523 kubelet[2466]: I0130 12:56:12.748081 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b1ac557-a0b7-4ce8-bdff-168e25073a34-lib-modules\") pod \"kube-proxy-9vqlh\" (UID: \"3b1ac557-a0b7-4ce8-bdff-168e25073a34\") " pod="kube-system/kube-proxy-9vqlh" Jan 30 12:56:12.872780 kubelet[2466]: E0130 12:56:12.871395 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:12.874153 containerd[1438]: time="2025-01-30T12:56:12.873634587Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-9vqlh,Uid:3b1ac557-a0b7-4ce8-bdff-168e25073a34,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:12.880140 kubelet[2466]: E0130 12:56:12.880098 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:12.881852 containerd[1438]: time="2025-01-30T12:56:12.881799960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lb69,Uid:364bfab2-93a9-4445-9fae-81330b062b22,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:12.935107 systemd[1]: Created slice kubepods-besteffort-podc1b128be_584d_47f9_9527_ac0a43fcf59b.slice - libcontainer container kubepods-besteffort-podc1b128be_584d_47f9_9527_ac0a43fcf59b.slice. Jan 30 12:56:12.940998 containerd[1438]: time="2025-01-30T12:56:12.940703631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:12.940998 containerd[1438]: time="2025-01-30T12:56:12.940766513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:12.940998 containerd[1438]: time="2025-01-30T12:56:12.940792994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:12.940998 containerd[1438]: time="2025-01-30T12:56:12.940879316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:12.944851 containerd[1438]: time="2025-01-30T12:56:12.944679555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:12.944851 containerd[1438]: time="2025-01-30T12:56:12.944748317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:12.944851 containerd[1438]: time="2025-01-30T12:56:12.944764597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:12.945161 containerd[1438]: time="2025-01-30T12:56:12.944900441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:12.967472 systemd[1]: Started cri-containerd-5aec7576fd71bcd0f9fce80ef9ec5da430b0e694472b950c9a7ab6713b204815.scope - libcontainer container 5aec7576fd71bcd0f9fce80ef9ec5da430b0e694472b950c9a7ab6713b204815. Jan 30 12:56:12.968882 systemd[1]: Started cri-containerd-e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc.scope - libcontainer container e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc. 
Jan 30 12:56:12.996432 containerd[1438]: time="2025-01-30T12:56:12.996266678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vqlh,Uid:3b1ac557-a0b7-4ce8-bdff-168e25073a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aec7576fd71bcd0f9fce80ef9ec5da430b0e694472b950c9a7ab6713b204815\""
Jan 30 12:56:12.997297 kubelet[2466]: E0130 12:56:12.997270 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:12.999694 containerd[1438]: time="2025-01-30T12:56:12.999489058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lb69,Uid:364bfab2-93a9-4445-9fae-81330b062b22,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\""
Jan 30 12:56:13.000928 kubelet[2466]: E0130 12:56:13.000413 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:56:13.003059 containerd[1438]: time="2025-01-30T12:56:13.002912362Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 12:56:13.004218 containerd[1438]: time="2025-01-30T12:56:13.004159918Z" level=info msg="CreateContainer within sandbox \"5aec7576fd71bcd0f9fce80ef9ec5da430b0e694472b950c9a7ab6713b204815\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 12:56:13.028606 containerd[1438]: time="2025-01-30T12:56:13.028552236Z" level=info msg="CreateContainer within sandbox \"5aec7576fd71bcd0f9fce80ef9ec5da430b0e694472b950c9a7ab6713b204815\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf86a79ed3d2391873c5ac572d6180d8d0c344c54d823ab3de2d0ed833b103e2\""
Jan 30 12:56:13.029479 containerd[1438]: time="2025-01-30T12:56:13.029426502Z" level=info msg="StartContainer for \"bf86a79ed3d2391873c5ac572d6180d8d0c344c54d823ab3de2d0ed833b103e2\""
Jan 30 12:56:13.049363 kubelet[2466]: I0130 12:56:13.049283 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l996\" (UniqueName: \"kubernetes.io/projected/c1b128be-584d-47f9-9527-ac0a43fcf59b-kube-api-access-8l996\") pod \"cilium-operator-5d85765b45-z55vt\" (UID: \"c1b128be-584d-47f9-9527-ac0a43fcf59b\") " pod="kube-system/cilium-operator-5d85765b45-z55vt"
Jan 30 12:56:13.049617 kubelet[2466]: I0130 12:56:13.049559 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1b128be-584d-47f9-9527-ac0a43fcf59b-cilium-config-path\") pod \"cilium-operator-5d85765b45-z55vt\" (UID: \"c1b128be-584d-47f9-9527-ac0a43fcf59b\") " pod="kube-system/cilium-operator-5d85765b45-z55vt"
Jan 30 12:56:13.060453 systemd[1]: Started cri-containerd-bf86a79ed3d2391873c5ac572d6180d8d0c344c54d823ab3de2d0ed833b103e2.scope - libcontainer container bf86a79ed3d2391873c5ac572d6180d8d0c344c54d823ab3de2d0ed833b103e2.
Jan 30 12:56:13.087908 containerd[1438]: time="2025-01-30T12:56:13.087863101Z" level=info msg="StartContainer for \"bf86a79ed3d2391873c5ac572d6180d8d0c344c54d823ab3de2d0ed833b103e2\" returns successfully" Jan 30 12:56:13.170835 kubelet[2466]: E0130 12:56:13.170792 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:13.216680 kubelet[2466]: I0130 12:56:13.216621 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9vqlh" podStartSLOduration=1.216603648 podStartE2EDuration="1.216603648s" podCreationTimestamp="2025-01-30 12:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:13.216271999 +0000 UTC m=+7.170228714" watchObservedRunningTime="2025-01-30 12:56:13.216603648 +0000 UTC m=+7.170560403" Jan 30 12:56:13.239869 kubelet[2466]: E0130 12:56:13.239829 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:13.240425 containerd[1438]: time="2025-01-30T12:56:13.240382348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-z55vt,Uid:c1b128be-584d-47f9-9527-ac0a43fcf59b,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:13.270724 containerd[1438]: time="2025-01-30T12:56:13.270615198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:13.270724 containerd[1438]: time="2025-01-30T12:56:13.270682920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:13.270724 containerd[1438]: time="2025-01-30T12:56:13.270695040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:13.271107 containerd[1438]: time="2025-01-30T12:56:13.271040130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:13.289765 systemd[1]: Started cri-containerd-774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f.scope - libcontainer container 774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f. 
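The recurring dns.go:153 "Nameserver limits exceeded" warning means the node's resolv.conf lists more nameservers than kubelet will copy into a pod: upstream Kubernetes caps the list at three, so only the applied line 1.1.1.1 1.0.0.1 8.8.8.8 survives and the extras are dropped on every pod sync. Roughly this effect, as a sketch rather than kubelet's own code (the limit of 3 matches upstream's MaxDNSNameservers; the file parsing here is mine):

    MAX_DNS_NAMESERVERS = 3  # upstream Kubernetes limit

    def effective_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [parts[1] for line in resolv_conf_text.splitlines()
                   if (parts := line.split()) and parts[0] == "nameserver"
                   and len(parts) > 1]
        return servers[:MAX_DNS_NAMESERVERS]  # extras are omitted, hence the warning
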
Jan 30 12:56:13.324812 containerd[1438]: time="2025-01-30T12:56:13.324743190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-z55vt,Uid:c1b128be-584d-47f9-9527-ac0a43fcf59b,Namespace:kube-system,Attempt:0,} returns sandbox id \"774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f\"" Jan 30 12:56:13.325681 kubelet[2466]: E0130 12:56:13.325658 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:18.463495 kubelet[2466]: E0130 12:56:18.463450 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:18.921809 kubelet[2466]: E0130 12:56:18.921770 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:19.794431 kubelet[2466]: E0130 12:56:19.794372 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:20.066338 update_engine[1423]: I20250130 12:56:20.066035 1423 update_attempter.cc:509] Updating boot flags... Jan 30 12:56:20.110008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2850) Jan 30 12:56:21.119431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251615411.mount: Deactivated successfully. Jan 30 12:56:22.475347 containerd[1438]: time="2025-01-30T12:56:22.475284712Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:22.476700 containerd[1438]: time="2025-01-30T12:56:22.476668537Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 12:56:22.477568 containerd[1438]: time="2025-01-30T12:56:22.477509073Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:22.479120 containerd[1438]: time="2025-01-30T12:56:22.479083302Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.476019256s" Jan 30 12:56:22.479336 containerd[1438]: time="2025-01-30T12:56:22.479237625Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 12:56:22.481698 containerd[1438]: time="2025-01-30T12:56:22.481636789Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 12:56:22.488573 containerd[1438]: time="2025-01-30T12:56:22.487213812Z" level=info msg="CreateContainer 
within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:56:22.611564 containerd[1438]: time="2025-01-30T12:56:22.611509308Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\"" Jan 30 12:56:22.612299 containerd[1438]: time="2025-01-30T12:56:22.612260802Z" level=info msg="StartContainer for \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\"" Jan 30 12:56:22.649670 systemd[1]: Started cri-containerd-d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154.scope - libcontainer container d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154. Jan 30 12:56:22.692931 containerd[1438]: time="2025-01-30T12:56:22.692563246Z" level=info msg="StartContainer for \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\" returns successfully" Jan 30 12:56:22.775003 systemd[1]: cri-containerd-d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154.scope: Deactivated successfully. Jan 30 12:56:22.809054 containerd[1438]: time="2025-01-30T12:56:22.804757998Z" level=info msg="shim disconnected" id=d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154 namespace=k8s.io Jan 30 12:56:22.809054 containerd[1438]: time="2025-01-30T12:56:22.808863594Z" level=warning msg="cleaning up after shim disconnected" id=d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154 namespace=k8s.io Jan 30 12:56:22.809054 containerd[1438]: time="2025-01-30T12:56:22.808880474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:23.196076 kubelet[2466]: E0130 12:56:23.195816 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:23.201051 containerd[1438]: time="2025-01-30T12:56:23.200733580Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:56:23.227346 containerd[1438]: time="2025-01-30T12:56:23.227292288Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\"" Jan 30 12:56:23.227967 containerd[1438]: time="2025-01-30T12:56:23.227933219Z" level=info msg="StartContainer for \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\"" Jan 30 12:56:23.272473 systemd[1]: Started cri-containerd-ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3.scope - libcontainer container ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3. Jan 30 12:56:23.304293 containerd[1438]: time="2025-01-30T12:56:23.304211442Z" level=info msg="StartContainer for \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\" returns successfully" Jan 30 12:56:23.336985 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:56:23.337266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:56:23.337636 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 12:56:23.346794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:56:23.347344 systemd[1]: cri-containerd-ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3.scope: Deactivated successfully. Jan 30 12:56:23.369874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:56:23.413470 containerd[1438]: time="2025-01-30T12:56:23.413362604Z" level=info msg="shim disconnected" id=ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3 namespace=k8s.io Jan 30 12:56:23.413726 containerd[1438]: time="2025-01-30T12:56:23.413475806Z" level=warning msg="cleaning up after shim disconnected" id=ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3 namespace=k8s.io Jan 30 12:56:23.413726 containerd[1438]: time="2025-01-30T12:56:23.413497166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:23.598531 containerd[1438]: time="2025-01-30T12:56:23.598379701Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:23.599316 containerd[1438]: time="2025-01-30T12:56:23.599218276Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 12:56:23.600378 containerd[1438]: time="2025-01-30T12:56:23.600330256Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:23.601904 containerd[1438]: time="2025-01-30T12:56:23.601859203Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.120150652s" Jan 30 12:56:23.601957 containerd[1438]: time="2025-01-30T12:56:23.601907443Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 12:56:23.604635 containerd[1438]: time="2025-01-30T12:56:23.604539530Z" level=info msg="CreateContainer within sandbox \"774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 12:56:23.609124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154-rootfs.mount: Deactivated successfully. 
Jan 30 12:56:23.619124 containerd[1438]: time="2025-01-30T12:56:23.619056945Z" level=info msg="CreateContainer within sandbox \"774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\"" Jan 30 12:56:23.620402 containerd[1438]: time="2025-01-30T12:56:23.619809679Z" level=info msg="StartContainer for \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\"" Jan 30 12:56:23.649463 systemd[1]: Started cri-containerd-fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496.scope - libcontainer container fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496. Jan 30 12:56:23.690845 containerd[1438]: time="2025-01-30T12:56:23.690792368Z" level=info msg="StartContainer for \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\" returns successfully" Jan 30 12:56:24.194309 kubelet[2466]: E0130 12:56:24.193829 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:24.198355 kubelet[2466]: E0130 12:56:24.198210 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:24.201753 containerd[1438]: time="2025-01-30T12:56:24.201696243Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:56:24.283030 kubelet[2466]: I0130 12:56:24.282954 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-z55vt" podStartSLOduration=2.007152965 podStartE2EDuration="12.282929767s" podCreationTimestamp="2025-01-30 12:56:12 +0000 UTC" firstStartedPulling="2025-01-30 12:56:13.327059138 +0000 UTC m=+7.281015893" lastFinishedPulling="2025-01-30 12:56:23.60283598 +0000 UTC m=+17.556792695" observedRunningTime="2025-01-30 12:56:24.217812914 +0000 UTC m=+18.171769669" watchObservedRunningTime="2025-01-30 12:56:24.282929767 +0000 UTC m=+18.236886522" Jan 30 12:56:24.306283 containerd[1438]: time="2025-01-30T12:56:24.305491026Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\"" Jan 30 12:56:24.308171 containerd[1438]: time="2025-01-30T12:56:24.306634605Z" level=info msg="StartContainer for \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\"" Jan 30 12:56:24.363854 systemd[1]: Started cri-containerd-175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7.scope - libcontainer container 175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7. Jan 30 12:56:24.401816 containerd[1438]: time="2025-01-30T12:56:24.401627160Z" level=info msg="StartContainer for \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\" returns successfully" Jan 30 12:56:24.419698 systemd[1]: cri-containerd-175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7.scope: Deactivated successfully. 
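The pod_startup_latency_tracker entries carry their SLO bookkeeping inline: for cilium-operator above, podStartE2EDuration (~12.283s) minus the pull window (firstStartedPulling 12:56:13.327 to lastFinishedPulling 12:56:23.603, ~10.276s) leaves the ~2.007s podStartSLOduration, consistent with the tracker excluding image-pull time from the SLO number. A parsing sketch against the exact field names in these entries:

    import re

    # Pull the SLO fields out of a pod_startup_latency_tracker entry (sketch;
    # the field names are exactly those in the log line above).
    FIELD_RE = re.compile(r'(podStartSLOduration|podStartE2EDuration)=([\d.]+)s?')

    def startup_fields(entry: str) -> dict[str, float]:
        return {k: float(v) for k, v in FIELD_RE.findall(entry)}
    # cilium-operator: SLO ~2.007s = E2E ~12.283s minus ~10.276s of image pull.
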
Jan 30 12:56:24.467864 containerd[1438]: time="2025-01-30T12:56:24.467666109Z" level=info msg="shim disconnected" id=175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7 namespace=k8s.io Jan 30 12:56:24.467864 containerd[1438]: time="2025-01-30T12:56:24.467723310Z" level=warning msg="cleaning up after shim disconnected" id=175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7 namespace=k8s.io Jan 30 12:56:24.467864 containerd[1438]: time="2025-01-30T12:56:24.467735310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:25.212762 kubelet[2466]: E0130 12:56:25.212348 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:25.212762 kubelet[2466]: E0130 12:56:25.212949 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:25.220600 containerd[1438]: time="2025-01-30T12:56:25.220433062Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:56:25.250305 containerd[1438]: time="2025-01-30T12:56:25.250246940Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\"" Jan 30 12:56:25.252135 containerd[1438]: time="2025-01-30T12:56:25.251441319Z" level=info msg="StartContainer for \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\"" Jan 30 12:56:25.294464 systemd[1]: Started cri-containerd-7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09.scope - libcontainer container 7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09. Jan 30 12:56:25.321361 systemd[1]: cri-containerd-7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09.scope: Deactivated successfully. Jan 30 12:56:25.330283 containerd[1438]: time="2025-01-30T12:56:25.330138501Z" level=info msg="StartContainer for \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\" returns successfully" Jan 30 12:56:25.352213 containerd[1438]: time="2025-01-30T12:56:25.352123693Z" level=info msg="shim disconnected" id=7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09 namespace=k8s.io Jan 30 12:56:25.352213 containerd[1438]: time="2025-01-30T12:56:25.352184974Z" level=warning msg="cleaning up after shim disconnected" id=7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09 namespace=k8s.io Jan 30 12:56:25.352213 containerd[1438]: time="2025-01-30T12:56:25.352194174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:25.608521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09-rootfs.mount: Deactivated successfully. 
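Taken together, the CreateContainer entries inside the cilium sandbox spell out the chart's init-container order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, each one started, exiting, and having its shim reaped before the next, with the long-running cilium-agent following just below. A sketch that recovers that order per sandbox from journal text like the above:

    import re

    # Recover the per-sandbox container order from the CreateContainer request
    # entries (sketch; matches the exact msg format used in this log, and the
    # "for container &ContainerMetadata" wording keeps returns out of the list).
    CREATE_RE = re.compile(
        r'CreateContainer within sandbox \\?"([0-9a-f]{64})\\?" '
        r'for container &ContainerMetadata\{Name:([^,]+),')

    def container_order(journal_text: str, sandbox_id: str) -> list[str]:
        return [name for sid, name in CREATE_RE.findall(journal_text)
                if sid == sandbox_id]
    # e6efb08d... -> ["mount-cgroup", "apply-sysctl-overwrites",
    #                 "mount-bpf-fs", "clean-cilium-state", "cilium-agent"]
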
Jan 30 12:56:26.217556 kubelet[2466]: E0130 12:56:26.217504 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:26.222327 containerd[1438]: time="2025-01-30T12:56:26.221999517Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:56:26.251135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569887067.mount: Deactivated successfully. Jan 30 12:56:26.253903 containerd[1438]: time="2025-01-30T12:56:26.253826084Z" level=info msg="CreateContainer within sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\"" Jan 30 12:56:26.254472 containerd[1438]: time="2025-01-30T12:56:26.254442574Z" level=info msg="StartContainer for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\"" Jan 30 12:56:26.285892 systemd[1]: Started cri-containerd-8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d.scope - libcontainer container 8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d. Jan 30 12:56:26.321297 containerd[1438]: time="2025-01-30T12:56:26.321218436Z" level=info msg="StartContainer for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" returns successfully" Jan 30 12:56:26.460769 kubelet[2466]: I0130 12:56:26.456768 2466 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 12:56:26.502406 systemd[1]: Created slice kubepods-burstable-pod1416c8a1_bd41_446b_99e6_857ba025de3a.slice - libcontainer container kubepods-burstable-pod1416c8a1_bd41_446b_99e6_857ba025de3a.slice. Jan 30 12:56:26.507657 systemd[1]: Created slice kubepods-burstable-pode0b57787_bd9c_472a_9fda_2b236f5d7e70.slice - libcontainer container kubepods-burstable-pode0b57787_bd9c_472a_9fda_2b236f5d7e70.slice. 
Jan 30 12:56:26.648209 kubelet[2466]: I0130 12:56:26.648152 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1416c8a1-bd41-446b-99e6-857ba025de3a-config-volume\") pod \"coredns-6f6b679f8f-trlgm\" (UID: \"1416c8a1-bd41-446b-99e6-857ba025de3a\") " pod="kube-system/coredns-6f6b679f8f-trlgm" Jan 30 12:56:26.648209 kubelet[2466]: I0130 12:56:26.648212 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmggg\" (UniqueName: \"kubernetes.io/projected/1416c8a1-bd41-446b-99e6-857ba025de3a-kube-api-access-rmggg\") pod \"coredns-6f6b679f8f-trlgm\" (UID: \"1416c8a1-bd41-446b-99e6-857ba025de3a\") " pod="kube-system/coredns-6f6b679f8f-trlgm" Jan 30 12:56:26.648463 kubelet[2466]: I0130 12:56:26.648260 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0b57787-bd9c-472a-9fda-2b236f5d7e70-config-volume\") pod \"coredns-6f6b679f8f-tm5p5\" (UID: \"e0b57787-bd9c-472a-9fda-2b236f5d7e70\") " pod="kube-system/coredns-6f6b679f8f-tm5p5" Jan 30 12:56:26.648463 kubelet[2466]: I0130 12:56:26.648286 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7x8r\" (UniqueName: \"kubernetes.io/projected/e0b57787-bd9c-472a-9fda-2b236f5d7e70-kube-api-access-v7x8r\") pod \"coredns-6f6b679f8f-tm5p5\" (UID: \"e0b57787-bd9c-472a-9fda-2b236f5d7e70\") " pod="kube-system/coredns-6f6b679f8f-tm5p5" Jan 30 12:56:26.806098 kubelet[2466]: E0130 12:56:26.805865 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:26.807403 containerd[1438]: time="2025-01-30T12:56:26.807355600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-trlgm,Uid:1416c8a1-bd41-446b-99e6-857ba025de3a,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:26.810478 kubelet[2466]: E0130 12:56:26.810449 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:26.810978 containerd[1438]: time="2025-01-30T12:56:26.810924775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tm5p5,Uid:e0b57787-bd9c-472a-9fda-2b236f5d7e70,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:27.221404 kubelet[2466]: E0130 12:56:27.221298 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:27.237974 kubelet[2466]: I0130 12:56:27.237906 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5lb69" podStartSLOduration=5.758702032 podStartE2EDuration="15.237888954s" podCreationTimestamp="2025-01-30 12:56:12 +0000 UTC" firstStartedPulling="2025-01-30 12:56:13.002308624 +0000 UTC m=+6.956265379" lastFinishedPulling="2025-01-30 12:56:22.481495546 +0000 UTC m=+16.435452301" observedRunningTime="2025-01-30 12:56:27.236597695 +0000 UTC m=+21.190554450" watchObservedRunningTime="2025-01-30 12:56:27.237888954 +0000 UTC m=+21.191845709" Jan 30 12:56:28.224076 kubelet[2466]: E0130 12:56:28.224032 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:28.756659 systemd-networkd[1377]: cilium_host: Link UP Jan 30 12:56:28.756785 systemd-networkd[1377]: cilium_net: Link UP Jan 30 12:56:28.756916 systemd-networkd[1377]: cilium_net: Gained carrier Jan 30 12:56:28.757060 systemd-networkd[1377]: cilium_host: Gained carrier Jan 30 12:56:28.897028 systemd-networkd[1377]: cilium_vxlan: Link UP Jan 30 12:56:28.897036 systemd-networkd[1377]: cilium_vxlan: Gained carrier Jan 30 12:56:29.225578 kubelet[2466]: E0130 12:56:29.225540 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:29.296282 kernel: NET: Registered PF_ALG protocol family Jan 30 12:56:29.432427 systemd-networkd[1377]: cilium_net: Gained IPv6LL Jan 30 12:56:29.432711 systemd-networkd[1377]: cilium_host: Gained IPv6LL Jan 30 12:56:29.961037 systemd-networkd[1377]: lxc_health: Link UP Jan 30 12:56:29.966874 systemd-networkd[1377]: lxc_health: Gained carrier Jan 30 12:56:30.496922 kernel: eth0: renamed from tmp66637 Jan 30 12:56:30.504857 systemd-networkd[1377]: lxc8af74a78e8d2: Link UP Jan 30 12:56:30.519257 kernel: eth0: renamed from tmp928b7 Jan 30 12:56:30.524919 systemd-networkd[1377]: lxc84ee1b802940: Link UP Jan 30 12:56:30.526926 systemd-networkd[1377]: lxc8af74a78e8d2: Gained carrier Jan 30 12:56:30.527805 systemd-networkd[1377]: lxc84ee1b802940: Gained carrier Jan 30 12:56:30.776443 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL Jan 30 12:56:30.901316 kubelet[2466]: E0130 12:56:30.901218 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:31.352796 systemd-networkd[1377]: lxc_health: Gained IPv6LL Jan 30 12:56:31.864633 systemd-networkd[1377]: lxc8af74a78e8d2: Gained IPv6LL Jan 30 12:56:31.938281 kubelet[2466]: I0130 12:56:31.937699 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:56:31.938281 kubelet[2466]: E0130 12:56:31.938071 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:32.184365 systemd-networkd[1377]: lxc84ee1b802940: Gained IPv6LL Jan 30 12:56:32.230903 kubelet[2466]: E0130 12:56:32.230844 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:34.304782 containerd[1438]: time="2025-01-30T12:56:34.304536547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:34.304782 containerd[1438]: time="2025-01-30T12:56:34.304604707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:34.304782 containerd[1438]: time="2025-01-30T12:56:34.304620588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:34.304782 containerd[1438]: time="2025-01-30T12:56:34.304712509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:34.313904 containerd[1438]: time="2025-01-30T12:56:34.313635167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:34.313904 containerd[1438]: time="2025-01-30T12:56:34.313706967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:34.313904 containerd[1438]: time="2025-01-30T12:56:34.313723407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:34.314213 containerd[1438]: time="2025-01-30T12:56:34.314137092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:34.335424 systemd[1]: Started cri-containerd-66637c5ced4a13afd8bc4b2c153e13efcaf8c5eba2a16f356e5384561779d487.scope - libcontainer container 66637c5ced4a13afd8bc4b2c153e13efcaf8c5eba2a16f356e5384561779d487. Jan 30 12:56:34.339504 systemd[1]: Started cri-containerd-928b776e23f4faba24bf1942970e9670466f9bcd4345a0ccae6a05e67987573a.scope - libcontainer container 928b776e23f4faba24bf1942970e9670466f9bcd4345a0ccae6a05e67987573a. Jan 30 12:56:34.346737 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:34.350323 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:34.366838 containerd[1438]: time="2025-01-30T12:56:34.366623268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-trlgm,Uid:1416c8a1-bd41-446b-99e6-857ba025de3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"66637c5ced4a13afd8bc4b2c153e13efcaf8c5eba2a16f356e5384561779d487\"" Jan 30 12:56:34.367747 kubelet[2466]: E0130 12:56:34.367698 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:34.368092 containerd[1438]: time="2025-01-30T12:56:34.367768561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tm5p5,Uid:e0b57787-bd9c-472a-9fda-2b236f5d7e70,Namespace:kube-system,Attempt:0,} returns sandbox id \"928b776e23f4faba24bf1942970e9670466f9bcd4345a0ccae6a05e67987573a\"" Jan 30 12:56:34.370475 kubelet[2466]: E0130 12:56:34.370452 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:34.370912 containerd[1438]: time="2025-01-30T12:56:34.370855435Z" level=info msg="CreateContainer within sandbox \"66637c5ced4a13afd8bc4b2c153e13efcaf8c5eba2a16f356e5384561779d487\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:56:34.372997 containerd[1438]: time="2025-01-30T12:56:34.372819536Z" level=info msg="CreateContainer within sandbox \"928b776e23f4faba24bf1942970e9670466f9bcd4345a0ccae6a05e67987573a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:56:34.385253 containerd[1438]: time="2025-01-30T12:56:34.385190232Z" level=info msg="CreateContainer within sandbox \"66637c5ced4a13afd8bc4b2c153e13efcaf8c5eba2a16f356e5384561779d487\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"cb431dfef14fa187cee8a427d6a109d5aa3b5b88077c5f393305a9c3e61aa5e9\"" Jan 30 12:56:34.387121 containerd[1438]: time="2025-01-30T12:56:34.387085173Z" level=info msg="StartContainer for \"cb431dfef14fa187cee8a427d6a109d5aa3b5b88077c5f393305a9c3e61aa5e9\"" Jan 30 12:56:34.391193 containerd[1438]: time="2025-01-30T12:56:34.391125377Z" level=info msg="CreateContainer within sandbox \"928b776e23f4faba24bf1942970e9670466f9bcd4345a0ccae6a05e67987573a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0fe9defecfa3e882e3c05050d729dfca54fd3044833cea70c16cbe2f7a08af9\"" Jan 30 12:56:34.393156 containerd[1438]: time="2025-01-30T12:56:34.392568233Z" level=info msg="StartContainer for \"d0fe9defecfa3e882e3c05050d729dfca54fd3044833cea70c16cbe2f7a08af9\"" Jan 30 12:56:34.422542 systemd[1]: Started cri-containerd-cb431dfef14fa187cee8a427d6a109d5aa3b5b88077c5f393305a9c3e61aa5e9.scope - libcontainer container cb431dfef14fa187cee8a427d6a109d5aa3b5b88077c5f393305a9c3e61aa5e9. Jan 30 12:56:34.424181 systemd[1]: Started cri-containerd-d0fe9defecfa3e882e3c05050d729dfca54fd3044833cea70c16cbe2f7a08af9.scope - libcontainer container d0fe9defecfa3e882e3c05050d729dfca54fd3044833cea70c16cbe2f7a08af9. Jan 30 12:56:34.472319 containerd[1438]: time="2025-01-30T12:56:34.472273749Z" level=info msg="StartContainer for \"d0fe9defecfa3e882e3c05050d729dfca54fd3044833cea70c16cbe2f7a08af9\" returns successfully" Jan 30 12:56:34.474331 containerd[1438]: time="2025-01-30T12:56:34.472273789Z" level=info msg="StartContainer for \"cb431dfef14fa187cee8a427d6a109d5aa3b5b88077c5f393305a9c3e61aa5e9\" returns successfully" Jan 30 12:56:35.242813 kubelet[2466]: E0130 12:56:35.241907 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:35.247346 kubelet[2466]: E0130 12:56:35.247307 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:35.266177 kubelet[2466]: I0130 12:56:35.266104 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tm5p5" podStartSLOduration=23.26608904 podStartE2EDuration="23.26608904s" podCreationTimestamp="2025-01-30 12:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:35.265723076 +0000 UTC m=+29.219679831" watchObservedRunningTime="2025-01-30 12:56:35.26608904 +0000 UTC m=+29.220045795" Jan 30 12:56:35.279812 kubelet[2466]: I0130 12:56:35.279139 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-trlgm" podStartSLOduration=23.279117498 podStartE2EDuration="23.279117498s" podCreationTimestamp="2025-01-30 12:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:35.277180117 +0000 UTC m=+29.231136872" watchObservedRunningTime="2025-01-30 12:56:35.279117498 +0000 UTC m=+29.233074213" Jan 30 12:56:36.249690 kubelet[2466]: E0130 12:56:36.249460 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:36.369634 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:42378.service - OpenSSH 
per-connection server daemon (10.0.0.1:42378). Jan 30 12:56:36.414679 sshd[3876]: Accepted publickey for core from 10.0.0.1 port 42378 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:36.416543 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:36.421391 systemd-logind[1418]: New session 8 of user core. Jan 30 12:56:36.430435 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 12:56:36.556373 sshd[3876]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:36.560402 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:42378.service: Deactivated successfully. Jan 30 12:56:36.562414 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 12:56:36.563099 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Jan 30 12:56:36.564330 systemd-logind[1418]: Removed session 8. Jan 30 12:56:36.807620 kubelet[2466]: E0130 12:56:36.807304 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:37.250705 kubelet[2466]: E0130 12:56:37.250669 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:37.270323 kubelet[2466]: E0130 12:56:37.270269 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:41.568362 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:42380.service - OpenSSH per-connection server daemon (10.0.0.1:42380). Jan 30 12:56:41.606778 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 42380 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:41.608753 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:41.613435 systemd-logind[1418]: New session 9 of user core. Jan 30 12:56:41.626038 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 12:56:41.742959 sshd[3897]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:41.747304 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:42380.service: Deactivated successfully. Jan 30 12:56:41.751006 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 12:56:41.751813 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Jan 30 12:56:41.752861 systemd-logind[1418]: Removed session 9. Jan 30 12:56:46.772166 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:49178.service - OpenSSH per-connection server daemon (10.0.0.1:49178). Jan 30 12:56:46.820660 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 49178 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:46.822607 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:46.829710 systemd-logind[1418]: New session 10 of user core. Jan 30 12:56:46.839453 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 12:56:46.966770 sshd[3914]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:46.969672 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:49178.service: Deactivated successfully. Jan 30 12:56:46.971382 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 12:56:46.973815 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. 
Jan 30 12:56:46.975520 systemd-logind[1418]: Removed session 10. Jan 30 12:56:51.986716 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:49192.service - OpenSSH per-connection server daemon (10.0.0.1:49192). Jan 30 12:56:52.030906 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 49192 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:52.032442 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:52.036390 systemd-logind[1418]: New session 11 of user core. Jan 30 12:56:52.047441 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 12:56:52.170779 sshd[3929]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:52.184027 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:49192.service: Deactivated successfully. Jan 30 12:56:52.185873 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 12:56:52.188297 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. Jan 30 12:56:52.196051 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:49196.service - OpenSSH per-connection server daemon (10.0.0.1:49196). Jan 30 12:56:52.197251 systemd-logind[1418]: Removed session 11. Jan 30 12:56:52.239637 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 49196 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:52.241167 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:52.245644 systemd-logind[1418]: New session 12 of user core. Jan 30 12:56:52.261445 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 12:56:52.426724 sshd[3945]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:52.436632 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:49196.service: Deactivated successfully. Jan 30 12:56:52.439474 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 12:56:52.442394 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Jan 30 12:56:52.456316 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:49198.service - OpenSSH per-connection server daemon (10.0.0.1:49198). Jan 30 12:56:52.457876 systemd-logind[1418]: Removed session 12. Jan 30 12:56:52.497876 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 49198 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:52.499432 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:52.504654 systemd-logind[1418]: New session 13 of user core. Jan 30 12:56:52.514472 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 12:56:52.638420 sshd[3958]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:52.643415 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:49198.service: Deactivated successfully. Jan 30 12:56:52.645509 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 12:56:52.646220 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. Jan 30 12:56:52.647864 systemd-logind[1418]: Removed session 13. Jan 30 12:56:57.649826 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:57370.service - OpenSSH per-connection server daemon (10.0.0.1:57370). Jan 30 12:56:57.685953 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 57370 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:57.687325 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:57.692495 systemd-logind[1418]: New session 14 of user core. 
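Each SSH login above leaves the same fingerprint: sshd's "Accepted publickey", pam_unix's "session opened", logind's "New session N", a per-session session-N.scope, and then the matching closed/Removed lines. Session lifetimes therefore fall out of pairing the logind open/close events by session number. A correlation sketch, assuming one journal entry per line with the timestamp prefix used in this log (a fixed year is supplied because these stamps omit it):

    import re
    from datetime import datetime

    TS = "%Y %b %d %H:%M:%S.%f"
    EVENT_RE = re.compile(
        r'^(\w{3} \d+ [\d:.]+) .*?(New|Removed) session (\d+)', re.M)

    def session_lengths(journal_text: str, year: int = 2025) -> dict[str, float]:
        opened, lengths = {}, {}
        for stamp, kind, sid in EVENT_RE.findall(journal_text):
            t = datetime.strptime(f"{year} {stamp}", TS)
            if kind == "New":
                opened[sid] = t
            elif sid in opened:
                lengths[sid] = (t - opened[sid]).total_seconds()
        return lengths
    # e.g. session "8": opened 12:56:36.421, removed 12:56:36.564 -> ~0.14 s? No:
    # removed at 12:56:36.564330, opened 12:56:36.421391 -> ~0.14 s of overhead
    # after the ~0.14 s... the interactive window is bounded by the closed entry.
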
Jan 30 12:56:57.699635 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 12:56:57.818070 sshd[3973]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:57.821610 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:57370.service: Deactivated successfully. Jan 30 12:56:57.823526 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 12:56:57.824345 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Jan 30 12:56:57.825128 systemd-logind[1418]: Removed session 14. Jan 30 12:57:02.832015 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:60252.service - OpenSSH per-connection server daemon (10.0.0.1:60252). Jan 30 12:57:02.884049 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 60252 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:02.885922 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:02.890320 systemd-logind[1418]: New session 15 of user core. Jan 30 12:57:02.901488 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 12:57:03.022857 sshd[3988]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:03.034012 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:60252.service: Deactivated successfully. Jan 30 12:57:03.036264 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 12:57:03.037970 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. Jan 30 12:57:03.052879 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:60268.service - OpenSSH per-connection server daemon (10.0.0.1:60268). Jan 30 12:57:03.055793 systemd-logind[1418]: Removed session 15. Jan 30 12:57:03.085155 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 60268 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:03.086440 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:03.097327 systemd-logind[1418]: New session 16 of user core. Jan 30 12:57:03.107433 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 12:57:03.335882 sshd[4002]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:03.350046 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:60268.service: Deactivated successfully. Jan 30 12:57:03.352771 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 12:57:03.354139 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Jan 30 12:57:03.359624 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:60276.service - OpenSSH per-connection server daemon (10.0.0.1:60276). Jan 30 12:57:03.360537 systemd-logind[1418]: Removed session 16. Jan 30 12:57:03.399104 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 60276 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:03.400600 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:03.405160 systemd-logind[1418]: New session 17 of user core. Jan 30 12:57:03.411430 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 12:57:04.769573 sshd[4014]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:04.780485 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:60276.service: Deactivated successfully. Jan 30 12:57:04.785768 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 12:57:04.788066 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. 
Jan 30 12:57:04.798914 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:60286.service - OpenSSH per-connection server daemon (10.0.0.1:60286). Jan 30 12:57:04.801462 systemd-logind[1418]: Removed session 17. Jan 30 12:57:04.835525 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 60286 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:04.837323 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:04.842447 systemd-logind[1418]: New session 18 of user core. Jan 30 12:57:04.852422 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 12:57:05.080698 sshd[4038]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:05.092505 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:60286.service: Deactivated successfully. Jan 30 12:57:05.094928 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 12:57:05.096349 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Jan 30 12:57:05.109141 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:60294.service - OpenSSH per-connection server daemon (10.0.0.1:60294). Jan 30 12:57:05.110350 systemd-logind[1418]: Removed session 18. Jan 30 12:57:05.141086 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 60294 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:05.142506 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:05.147823 systemd-logind[1418]: New session 19 of user core. Jan 30 12:57:05.155519 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 12:57:05.264075 sshd[4050]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:05.268339 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:60294.service: Deactivated successfully. Jan 30 12:57:05.270039 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 12:57:05.270727 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. Jan 30 12:57:05.271711 systemd-logind[1418]: Removed session 19. Jan 30 12:57:10.277690 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:60296.service - OpenSSH per-connection server daemon (10.0.0.1:60296). Jan 30 12:57:10.312965 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 60296 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:10.314644 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:10.319382 systemd-logind[1418]: New session 20 of user core. Jan 30 12:57:10.328509 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 12:57:10.453572 sshd[4069]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:10.457993 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:60296.service: Deactivated successfully. Jan 30 12:57:10.460253 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 12:57:10.463015 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. Jan 30 12:57:10.465027 systemd-logind[1418]: Removed session 20. Jan 30 12:57:15.465946 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:43708.service - OpenSSH per-connection server daemon (10.0.0.1:43708). Jan 30 12:57:15.521748 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 43708 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:15.523510 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:15.528197 systemd-logind[1418]: New session 21 of user core. 
Jan 30 12:57:15.541922 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 12:57:15.658930 sshd[4085]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:15.664467 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:43708.service: Deactivated successfully. Jan 30 12:57:15.667807 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 12:57:15.668926 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit. Jan 30 12:57:15.669784 systemd-logind[1418]: Removed session 21. Jan 30 12:57:19.142685 kubelet[2466]: E0130 12:57:19.142632 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:57:20.685037 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:43724.service - OpenSSH per-connection server daemon (10.0.0.1:43724). Jan 30 12:57:20.725562 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 43724 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:20.727067 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:20.731028 systemd-logind[1418]: New session 22 of user core. Jan 30 12:57:20.742450 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 12:57:20.856502 sshd[4099]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:20.869220 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:43724.service: Deactivated successfully. Jan 30 12:57:20.873100 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 12:57:20.875259 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit. Jan 30 12:57:20.886599 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:43736.service - OpenSSH per-connection server daemon (10.0.0.1:43736). Jan 30 12:57:20.888286 systemd-logind[1418]: Removed session 22. Jan 30 12:57:20.925798 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 43736 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:20.926317 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:20.930761 systemd-logind[1418]: New session 23 of user core. Jan 30 12:57:20.938408 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 12:57:23.247031 containerd[1438]: time="2025-01-30T12:57:23.246873531Z" level=info msg="StopContainer for \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\" with timeout 30 (s)" Jan 30 12:57:23.248736 containerd[1438]: time="2025-01-30T12:57:23.248682218Z" level=info msg="Stop container \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\" with signal terminated" Jan 30 12:57:23.262189 systemd[1]: cri-containerd-fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496.scope: Deactivated successfully. Jan 30 12:57:23.294140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496-rootfs.mount: Deactivated successfully. 
Jan 30 12:57:23.297831 containerd[1438]: time="2025-01-30T12:57:23.297788064Z" level=info msg="StopContainer for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" with timeout 2 (s)" Jan 30 12:57:23.298283 containerd[1438]: time="2025-01-30T12:57:23.298220745Z" level=info msg="Stop container \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" with signal terminated" Jan 30 12:57:23.305456 systemd-networkd[1377]: lxc_health: Link DOWN Jan 30 12:57:23.305462 systemd-networkd[1377]: lxc_health: Lost carrier Jan 30 12:57:23.308999 containerd[1438]: time="2025-01-30T12:57:23.308938422Z" level=info msg="shim disconnected" id=fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496 namespace=k8s.io Jan 30 12:57:23.308999 containerd[1438]: time="2025-01-30T12:57:23.308994302Z" level=warning msg="cleaning up after shim disconnected" id=fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496 namespace=k8s.io Jan 30 12:57:23.308999 containerd[1438]: time="2025-01-30T12:57:23.309010462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:57:23.316814 containerd[1438]: time="2025-01-30T12:57:23.316748528Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:57:23.337093 systemd[1]: cri-containerd-8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d.scope: Deactivated successfully. Jan 30 12:57:23.337846 systemd[1]: cri-containerd-8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d.scope: Consumed 7.277s CPU time. Jan 30 12:57:23.355544 containerd[1438]: time="2025-01-30T12:57:23.355377219Z" level=info msg="StopContainer for \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\" returns successfully" Jan 30 12:57:23.358348 containerd[1438]: time="2025-01-30T12:57:23.356485663Z" level=info msg="StopPodSandbox for \"774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f\"" Jan 30 12:57:23.358348 containerd[1438]: time="2025-01-30T12:57:23.356535503Z" level=info msg="Container to stop \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:57:23.358379 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f-shm.mount: Deactivated successfully. Jan 30 12:57:23.362999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d-rootfs.mount: Deactivated successfully. Jan 30 12:57:23.366811 containerd[1438]: time="2025-01-30T12:57:23.366567817Z" level=info msg="shim disconnected" id=8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d namespace=k8s.io Jan 30 12:57:23.366811 containerd[1438]: time="2025-01-30T12:57:23.366630937Z" level=warning msg="cleaning up after shim disconnected" id=8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d namespace=k8s.io Jan 30 12:57:23.366811 containerd[1438]: time="2025-01-30T12:57:23.366648937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:57:23.369037 systemd[1]: cri-containerd-774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f.scope: Deactivated successfully. 
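The "Consumed 7.277s CPU time" note is systemd's cgroup CPU accounting for the agent's scope, reported as the scope deactivates. Against the wall clock bracketed by the agent's StartContainer at 12:56:26 and this stop at 12:57:23, that is a modest average load:

    # Back-of-envelope from the entries above: cilium-agent ran ~57 s of
    # wall-clock time (12:56:26 start -> 12:57:23 stop) for 7.277 s of CPU.
    wall_seconds = (57 * 60 + 23) - (56 * 60 + 26)   # = 57
    cpu_seconds = 7.277                              # "Consumed 7.277s CPU time"
    print(f"~{100 * cpu_seconds / wall_seconds:.0f}% of one core")  # ~13%
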
Jan 30 12:57:23.385534 containerd[1438]: time="2025-01-30T12:57:23.385208000Z" level=info msg="StopContainer for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" returns successfully" Jan 30 12:57:23.386543 containerd[1438]: time="2025-01-30T12:57:23.386498605Z" level=info msg="StopPodSandbox for \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\"" Jan 30 12:57:23.386593 containerd[1438]: time="2025-01-30T12:57:23.386555125Z" level=info msg="Container to stop \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:57:23.386593 containerd[1438]: time="2025-01-30T12:57:23.386570085Z" level=info msg="Container to stop \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:57:23.386593 containerd[1438]: time="2025-01-30T12:57:23.386581565Z" level=info msg="Container to stop \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:57:23.386681 containerd[1438]: time="2025-01-30T12:57:23.386596365Z" level=info msg="Container to stop \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:57:23.386681 containerd[1438]: time="2025-01-30T12:57:23.386609365Z" level=info msg="Container to stop \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:57:23.388561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc-shm.mount: Deactivated successfully. Jan 30 12:57:23.392867 systemd[1]: cri-containerd-e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc.scope: Deactivated successfully. 
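The shutdown mirrors the startup path in reverse: StopContainer first (with the stop timeout logged explicitly, 30 s for the operator and 2 s for the agent here), then StopPodSandbox, whose "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines enumerate every container the sandbox still tracks, and finally the network teardown and volume unmounts that follow just below. A sketch pulling the stop timeouts out of entries like these:

    import re

    # Collect StopContainer grace periods from the journal (sketch;
    # 'with timeout N (s)' is the exact phrasing in this log).
    STOP_RE = re.compile(
        r'StopContainer for \\?"([0-9a-f]{64})\\?" with timeout (\d+)')

    def stop_timeouts(journal_text: str) -> dict[str, int]:
        return {cid: int(t) for cid, t in STOP_RE.findall(journal_text)}
    # fc4b0599... (cilium-operator) -> 30, 8ffdaee1... (cilium-agent) -> 2
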
Jan 30 12:57:23.429286 containerd[1438]: time="2025-01-30T12:57:23.428496827Z" level=info msg="shim disconnected" id=774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f namespace=k8s.io
Jan 30 12:57:23.429286 containerd[1438]: time="2025-01-30T12:57:23.429266949Z" level=warning msg="cleaning up after shim disconnected" id=774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f namespace=k8s.io
Jan 30 12:57:23.429286 containerd[1438]: time="2025-01-30T12:57:23.429279909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:23.429515 containerd[1438]: time="2025-01-30T12:57:23.429350310Z" level=info msg="shim disconnected" id=e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc namespace=k8s.io
Jan 30 12:57:23.429515 containerd[1438]: time="2025-01-30T12:57:23.429391790Z" level=warning msg="cleaning up after shim disconnected" id=e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc namespace=k8s.io
Jan 30 12:57:23.429515 containerd[1438]: time="2025-01-30T12:57:23.429400470Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:23.442078 containerd[1438]: time="2025-01-30T12:57:23.442022433Z" level=info msg="TearDown network for sandbox \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" successfully"
Jan 30 12:57:23.442078 containerd[1438]: time="2025-01-30T12:57:23.442059713Z" level=info msg="StopPodSandbox for \"e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc\" returns successfully"
Jan 30 12:57:23.442424 containerd[1438]: time="2025-01-30T12:57:23.442387434Z" level=warning msg="cleanup warnings time=\"2025-01-30T12:57:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 12:57:23.466719 containerd[1438]: time="2025-01-30T12:57:23.466654076Z" level=info msg="TearDown network for sandbox \"774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f\" successfully"
Jan 30 12:57:23.466719 containerd[1438]: time="2025-01-30T12:57:23.466702596Z" level=info msg="StopPodSandbox for \"774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f\" returns successfully"
Jan 30 12:57:23.629806 kubelet[2466]: I0130 12:57:23.628760 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-etc-cni-netd\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.629806 kubelet[2466]: I0130 12:57:23.628805 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-kernel\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.629806 kubelet[2466]: I0130 12:57:23.628824 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-hostproc\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.629806 kubelet[2466]: I0130 12:57:23.628856 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/364bfab2-93a9-4445-9fae-81330b062b22-cilium-config-path\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.629806 kubelet[2466]: I0130 12:57:23.628877 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-xtables-lock\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.629806 kubelet[2466]: I0130 12:57:23.628891 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-run\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630310 kubelet[2466]: I0130 12:57:23.628906 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-net\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630310 kubelet[2466]: I0130 12:57:23.628926 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/364bfab2-93a9-4445-9fae-81330b062b22-clustermesh-secrets\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630310 kubelet[2466]: I0130 12:57:23.628942 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-bpf-maps\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630310 kubelet[2466]: I0130 12:57:23.628961 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-cgroup\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630310 kubelet[2466]: I0130 12:57:23.628978 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbf7m\" (UniqueName: \"kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-kube-api-access-tbf7m\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630310 kubelet[2466]: I0130 12:57:23.628996 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-hubble-tls\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630460 kubelet[2466]: I0130 12:57:23.629039 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1b128be-584d-47f9-9527-ac0a43fcf59b-cilium-config-path\") pod \"c1b128be-584d-47f9-9527-ac0a43fcf59b\" (UID: \"c1b128be-584d-47f9-9527-ac0a43fcf59b\") "
Jan 30 12:57:23.630460 kubelet[2466]: I0130 12:57:23.629057 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cni-path\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630460 kubelet[2466]: I0130 12:57:23.629075 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-lib-modules\") pod \"364bfab2-93a9-4445-9fae-81330b062b22\" (UID: \"364bfab2-93a9-4445-9fae-81330b062b22\") "
Jan 30 12:57:23.630460 kubelet[2466]: I0130 12:57:23.629091 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l996\" (UniqueName: \"kubernetes.io/projected/c1b128be-584d-47f9-9527-ac0a43fcf59b-kube-api-access-8l996\") pod \"c1b128be-584d-47f9-9527-ac0a43fcf59b\" (UID: \"c1b128be-584d-47f9-9527-ac0a43fcf59b\") "
Jan 30 12:57:23.634417 kubelet[2466]: I0130 12:57:23.632692 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634417 kubelet[2466]: I0130 12:57:23.632693 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634417 kubelet[2466]: I0130 12:57:23.632776 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-hostproc" (OuterVolumeSpecName: "hostproc") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634417 kubelet[2466]: I0130 12:57:23.632968 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634417 kubelet[2466]: I0130 12:57:23.633035 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cni-path" (OuterVolumeSpecName: "cni-path") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634794 kubelet[2466]: I0130 12:57:23.634752 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b128be-584d-47f9-9527-ac0a43fcf59b-kube-api-access-8l996" (OuterVolumeSpecName: "kube-api-access-8l996") pod "c1b128be-584d-47f9-9527-ac0a43fcf59b" (UID: "c1b128be-584d-47f9-9527-ac0a43fcf59b"). InnerVolumeSpecName "kube-api-access-8l996". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 12:57:23.634841 kubelet[2466]: I0130 12:57:23.634823 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634868 kubelet[2466]: I0130 12:57:23.634848 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.634954 kubelet[2466]: I0130 12:57:23.634921 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 12:57:23.634996 kubelet[2466]: I0130 12:57:23.634977 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.635024 kubelet[2466]: I0130 12:57:23.634999 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.635024 kubelet[2466]: I0130 12:57:23.635018 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:57:23.635300 kubelet[2466]: I0130 12:57:23.635272 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b128be-584d-47f9-9527-ac0a43fcf59b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1b128be-584d-47f9-9527-ac0a43fcf59b" (UID: "c1b128be-584d-47f9-9527-ac0a43fcf59b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 12:57:23.635784 kubelet[2466]: I0130 12:57:23.635754 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364bfab2-93a9-4445-9fae-81330b062b22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 12:57:23.637325 kubelet[2466]: I0130 12:57:23.637270 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-kube-api-access-tbf7m" (OuterVolumeSpecName: "kube-api-access-tbf7m") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "kube-api-access-tbf7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 12:57:23.637395 kubelet[2466]: I0130 12:57:23.637351 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364bfab2-93a9-4445-9fae-81330b062b22-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "364bfab2-93a9-4445-9fae-81330b062b22" (UID: "364bfab2-93a9-4445-9fae-81330b062b22"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 12:57:23.729594 kubelet[2466]: I0130 12:57:23.729543 2466 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729594 kubelet[2466]: I0130 12:57:23.729580 2466 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729594 kubelet[2466]: I0130 12:57:23.729592 2466 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729594 kubelet[2466]: I0130 12:57:23.729602 2466 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729594 kubelet[2466]: I0130 12:57:23.729612 2466 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/364bfab2-93a9-4445-9fae-81330b062b22-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729620 2466 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729628 2466 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729636 2466 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/364bfab2-93a9-4445-9fae-81330b062b22-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729644 2466 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729652 2466 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1b128be-584d-47f9-9527-ac0a43fcf59b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729662 2466 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729678 2466 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tbf7m\" (UniqueName: \"kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-kube-api-access-tbf7m\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.729862 kubelet[2466]: I0130 12:57:23.729689 2466 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/364bfab2-93a9-4445-9fae-81330b062b22-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.730065 kubelet[2466]: I0130 12:57:23.729697 2466 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.730065 kubelet[2466]: I0130 12:57:23.729705 2466 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/364bfab2-93a9-4445-9fae-81330b062b22-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:23.730065 kubelet[2466]: I0130 12:57:23.729713 2466 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8l996\" (UniqueName: \"kubernetes.io/projected/c1b128be-584d-47f9-9527-ac0a43fcf59b-kube-api-access-8l996\") on node \"localhost\" DevicePath \"\""
Jan 30 12:57:24.151234 systemd[1]: Removed slice kubepods-burstable-pod364bfab2_93a9_4445_9fae_81330b062b22.slice - libcontainer container kubepods-burstable-pod364bfab2_93a9_4445_9fae_81330b062b22.slice.
Jan 30 12:57:24.151606 systemd[1]: kubepods-burstable-pod364bfab2_93a9_4445_9fae_81330b062b22.slice: Consumed 7.492s CPU time.
Jan 30 12:57:24.152664 systemd[1]: Removed slice kubepods-besteffort-podc1b128be_584d_47f9_9527_ac0a43fcf59b.slice - libcontainer container kubepods-besteffort-podc1b128be_584d_47f9_9527_ac0a43fcf59b.slice.
Jan 30 12:57:24.270820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774be15d8742d2a76989aef71c94a76551a260bbb41c30337b1c385a7b6cb87f-rootfs.mount: Deactivated successfully.
Jan 30 12:57:24.270935 systemd[1]: var-lib-kubelet-pods-c1b128be\x2d584d\x2d47f9\x2d9527\x2dac0a43fcf59b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8l996.mount: Deactivated successfully.
Jan 30 12:57:24.271014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6efb08d504f5d9a53dcc6f957b808adcf43a7952632ca746fd29301475c6ecc-rootfs.mount: Deactivated successfully.
Jan 30 12:57:24.271072 systemd[1]: var-lib-kubelet-pods-364bfab2\x2d93a9\x2d4445\x2d9fae\x2d81330b062b22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtbf7m.mount: Deactivated successfully.
Jan 30 12:57:24.271136 systemd[1]: var-lib-kubelet-pods-364bfab2\x2d93a9\x2d4445\x2d9fae\x2d81330b062b22-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 12:57:24.271203 systemd[1]: var-lib-kubelet-pods-364bfab2\x2d93a9\x2d4445\x2d9fae\x2d81330b062b22-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 12:57:24.374057 kubelet[2466]: I0130 12:57:24.373965 2466 scope.go:117] "RemoveContainer" containerID="8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d"
Jan 30 12:57:24.375440 containerd[1438]: time="2025-01-30T12:57:24.375404207Z" level=info msg="RemoveContainer for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\""
Jan 30 12:57:24.418573 containerd[1438]: time="2025-01-30T12:57:24.418451154Z" level=info msg="RemoveContainer for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" returns successfully"
Jan 30 12:57:24.419109 kubelet[2466]: I0130 12:57:24.418978 2466 scope.go:117] "RemoveContainer" containerID="7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09"
Jan 30 12:57:24.420315 containerd[1438]: time="2025-01-30T12:57:24.420276361Z" level=info msg="RemoveContainer for \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\""
Jan 30 12:57:24.449470 containerd[1438]: time="2025-01-30T12:57:24.449413700Z" level=info msg="RemoveContainer for \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\" returns successfully"
Jan 30 12:57:24.449852 kubelet[2466]: I0130 12:57:24.449799 2466 scope.go:117] "RemoveContainer" containerID="175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7"
Jan 30 12:57:24.451330 containerd[1438]: time="2025-01-30T12:57:24.451291547Z" level=info msg="RemoveContainer for \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\""
Jan 30 12:57:24.476995 containerd[1438]: time="2025-01-30T12:57:24.476908754Z" level=info msg="RemoveContainer for \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\" returns successfully"
Jan 30 12:57:24.482184 kubelet[2466]: I0130 12:57:24.482118 2466 scope.go:117] "RemoveContainer" containerID="ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3"
Jan 30 12:57:24.483482 containerd[1438]: time="2025-01-30T12:57:24.483430337Z" level=info msg="RemoveContainer for \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\""
Jan 30 12:57:24.510251 containerd[1438]: time="2025-01-30T12:57:24.510192988Z" level=info msg="RemoveContainer for \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\" returns successfully"
Jan 30 12:57:24.510537 kubelet[2466]: I0130 12:57:24.510501 2466 scope.go:117] "RemoveContainer" containerID="d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154"
Jan 30 12:57:24.511596 containerd[1438]: time="2025-01-30T12:57:24.511551233Z" level=info msg="RemoveContainer for \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\""
Jan 30 12:57:24.524086 containerd[1438]: time="2025-01-30T12:57:24.524023835Z" level=info msg="RemoveContainer for \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\" returns successfully"
Jan 30 12:57:24.524357 kubelet[2466]: I0130 12:57:24.524324 2466 scope.go:117] "RemoveContainer" containerID="8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d"
Jan 30 12:57:24.524617 containerd[1438]: time="2025-01-30T12:57:24.524570837Z" level=error msg="ContainerStatus for \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\": not found"
Jan 30 12:57:24.535144 kubelet[2466]: E0130 12:57:24.535080 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\": not found" containerID="8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d"
Jan 30 12:57:24.535309 kubelet[2466]: I0130 12:57:24.535145 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d"} err="failed to get container status \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ffdaee1e9bb72f4193e1c6d645aa8bb376c9e3f55754714b941c77377e39a1d\": not found"
Jan 30 12:57:24.535309 kubelet[2466]: I0130 12:57:24.535274 2466 scope.go:117] "RemoveContainer" containerID="7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09"
Jan 30 12:57:24.535632 containerd[1438]: time="2025-01-30T12:57:24.535571915Z" level=error msg="ContainerStatus for \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\": not found"
Jan 30 12:57:24.535752 kubelet[2466]: E0130 12:57:24.535731 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\": not found" containerID="7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09"
Jan 30 12:57:24.535788 kubelet[2466]: I0130 12:57:24.535774 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09"} err="failed to get container status \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f33faed3a948b741a8b00c9a845bc3498dc27733ba706643d691b4e50c80e09\": not found"
Jan 30 12:57:24.535811 kubelet[2466]: I0130 12:57:24.535790 2466 scope.go:117] "RemoveContainer" containerID="175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7"
Jan 30 12:57:24.536005 containerd[1438]: time="2025-01-30T12:57:24.535955916Z" level=error msg="ContainerStatus for \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\": not found"
Jan 30 12:57:24.536079 kubelet[2466]: E0130 12:57:24.536055 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\": not found" containerID="175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7"
Jan 30 12:57:24.536119 kubelet[2466]: I0130 12:57:24.536074 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7"} err="failed to get container status \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"175772b98afba615a48de0eefec2ac796daa2233e2ea35b80a4c9b4b567d8bd7\": not found"
Jan 30 12:57:24.536119 kubelet[2466]: I0130 12:57:24.536089 2466 scope.go:117] "RemoveContainer" containerID="ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3"
Jan 30 12:57:24.536321 containerd[1438]: time="2025-01-30T12:57:24.536288117Z" level=error msg="ContainerStatus for \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\": not found"
Jan 30 12:57:24.536564 kubelet[2466]: E0130 12:57:24.536420 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\": not found" containerID="ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3"
Jan 30 12:57:24.536564 kubelet[2466]: I0130 12:57:24.536442 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3"} err="failed to get container status \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba0cdf776e9a587836952ae553012a1bf5b0dcccdbf480dd67bfacc66fafa4a3\": not found"
Jan 30 12:57:24.536564 kubelet[2466]: I0130 12:57:24.536455 2466 scope.go:117] "RemoveContainer" containerID="d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154"
Jan 30 12:57:24.536880 containerd[1438]: time="2025-01-30T12:57:24.536798719Z" level=error msg="ContainerStatus for \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\": not found"
Jan 30 12:57:24.537065 kubelet[2466]: E0130 12:57:24.536914 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\": not found" containerID="d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154"
Jan 30 12:57:24.537065 kubelet[2466]: I0130 12:57:24.536979 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154"} err="failed to get container status \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\": rpc error: code = NotFound desc = an error occurred when try to find container \"d512e4a475a2765b2d063fc890562fa9d322f21a0403cc18a144348796b40154\": not found"
Jan 30 12:57:24.537065 kubelet[2466]: I0130 12:57:24.536998 2466 scope.go:117] "RemoveContainer" containerID="fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496"
Jan 30 12:57:24.538022 containerd[1438]: time="2025-01-30T12:57:24.537994563Z" level=info msg="RemoveContainer for \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\""
Jan 30 12:57:24.540882 containerd[1438]: time="2025-01-30T12:57:24.540821573Z" level=info msg="RemoveContainer for \"fc4b0599204106f811b34f7954e49398aa685dc2d3dc0cbb3541fbf95926d496\" returns successfully"
Jan 30 12:57:25.143042 kubelet[2466]: E0130 12:57:25.142646 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:25.195744 sshd[4113]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:25.208947 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:43736.service: Deactivated successfully.
Jan 30 12:57:25.211737 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 12:57:25.211906 systemd[1]: session-23.scope: Consumed 1.605s CPU time.
Jan 30 12:57:25.213331 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit.
Jan 30 12:57:25.215981 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008).
Jan 30 12:57:25.216563 systemd-logind[1418]: Removed session 23.
Jan 30 12:57:25.255547 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:25.256953 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:25.261436 systemd-logind[1418]: New session 24 of user core.
Jan 30 12:57:25.268422 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 12:57:26.145176 kubelet[2466]: E0130 12:57:26.144478 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:26.147304 kubelet[2466]: I0130 12:57:26.146710 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364bfab2-93a9-4445-9fae-81330b062b22" path="/var/lib/kubelet/pods/364bfab2-93a9-4445-9fae-81330b062b22/volumes"
Jan 30 12:57:26.147989 kubelet[2466]: I0130 12:57:26.147934 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b128be-584d-47f9-9527-ac0a43fcf59b" path="/var/lib/kubelet/pods/c1b128be-584d-47f9-9527-ac0a43fcf59b/volumes"
Jan 30 12:57:26.192807 kubelet[2466]: E0130 12:57:26.192753 2466 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 12:57:26.203916 sshd[4272]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:26.212379 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:43008.service: Deactivated successfully.
Jan 30 12:57:26.216111 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 12:57:26.220523 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit.
Jan 30 12:57:26.225977 kubelet[2466]: E0130 12:57:26.225924 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1b128be-584d-47f9-9527-ac0a43fcf59b" containerName="cilium-operator"
Jan 30 12:57:26.225977 kubelet[2466]: E0130 12:57:26.225958 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="364bfab2-93a9-4445-9fae-81330b062b22" containerName="cilium-agent"
Jan 30 12:57:26.225977 kubelet[2466]: E0130 12:57:26.225966 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="364bfab2-93a9-4445-9fae-81330b062b22" containerName="mount-cgroup"
Jan 30 12:57:26.225977 kubelet[2466]: E0130 12:57:26.225973 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="364bfab2-93a9-4445-9fae-81330b062b22" containerName="apply-sysctl-overwrites"
Jan 30 12:57:26.225977 kubelet[2466]: E0130 12:57:26.225980 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="364bfab2-93a9-4445-9fae-81330b062b22" containerName="mount-bpf-fs"
Jan 30 12:57:26.225977 kubelet[2466]: E0130 12:57:26.225986 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="364bfab2-93a9-4445-9fae-81330b062b22" containerName="clean-cilium-state"
Jan 30 12:57:26.226260 kubelet[2466]: I0130 12:57:26.226010 2466 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b128be-584d-47f9-9527-ac0a43fcf59b" containerName="cilium-operator"
Jan 30 12:57:26.226260 kubelet[2466]: I0130 12:57:26.226016 2466 memory_manager.go:354] "RemoveStaleState removing state" podUID="364bfab2-93a9-4445-9fae-81330b062b22" containerName="cilium-agent"
Jan 30 12:57:26.229601 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024).
Jan 30 12:57:26.234060 systemd-logind[1418]: Removed session 24.
Jan 30 12:57:26.245865 systemd[1]: Created slice kubepods-burstable-podb03567c3_7160_4983_9b8b_8f9503d2ca5a.slice - libcontainer container kubepods-burstable-podb03567c3_7160_4983_9b8b_8f9503d2ca5a.slice.
Jan 30 12:57:26.272993 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:26.274548 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:26.279317 systemd-logind[1418]: New session 25 of user core.
Jan 30 12:57:26.286447 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 12:57:26.339005 sshd[4285]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:26.343798 kubelet[2466]: I0130 12:57:26.343546 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-cilium-cgroup\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.343798 kubelet[2466]: I0130 12:57:26.343596 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b03567c3-7160-4983-9b8b-8f9503d2ca5a-cilium-config-path\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.343798 kubelet[2466]: I0130 12:57:26.343622 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b03567c3-7160-4983-9b8b-8f9503d2ca5a-clustermesh-secrets\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.343798 kubelet[2466]: I0130 12:57:26.343642 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-cni-path\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.343798 kubelet[2466]: I0130 12:57:26.343662 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-host-proc-sys-net\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.343798 kubelet[2466]: I0130 12:57:26.343681 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-etc-cni-netd\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344032 kubelet[2466]: I0130 12:57:26.343712 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-xtables-lock\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344032 kubelet[2466]: I0130 12:57:26.343735 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b03567c3-7160-4983-9b8b-8f9503d2ca5a-cilium-ipsec-secrets\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344032 kubelet[2466]: I0130 12:57:26.343757 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-cilium-run\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344032 kubelet[2466]: I0130 12:57:26.343778 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-hostproc\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344275 kubelet[2466]: I0130 12:57:26.344188 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-host-proc-sys-kernel\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344346 kubelet[2466]: I0130 12:57:26.344311 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-bpf-maps\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344461 kubelet[2466]: I0130 12:57:26.344438 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b03567c3-7160-4983-9b8b-8f9503d2ca5a-lib-modules\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344629 kubelet[2466]: I0130 12:57:26.344532 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b03567c3-7160-4983-9b8b-8f9503d2ca5a-hubble-tls\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.344629 kubelet[2466]: I0130 12:57:26.344576 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssd52\" (UniqueName: \"kubernetes.io/projected/b03567c3-7160-4983-9b8b-8f9503d2ca5a-kube-api-access-ssd52\") pod \"cilium-2wgt5\" (UID: \"b03567c3-7160-4983-9b8b-8f9503d2ca5a\") " pod="kube-system/cilium-2wgt5"
Jan 30 12:57:26.349152 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:43024.service: Deactivated successfully.
Jan 30 12:57:26.350714 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 12:57:26.352033 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit.
Jan 30 12:57:26.359576 systemd[1]: Started sshd@25-10.0.0.64:22-10.0.0.1:43026.service - OpenSSH per-connection server daemon (10.0.0.1:43026).
Jan 30 12:57:26.360499 systemd-logind[1418]: Removed session 25.
Jan 30 12:57:26.390818 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 43026 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:26.392126 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:26.397624 systemd-logind[1418]: New session 26 of user core.
Jan 30 12:57:26.401673 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 12:57:26.552937 kubelet[2466]: E0130 12:57:26.552843 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:26.553961 containerd[1438]: time="2025-01-30T12:57:26.553920400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wgt5,Uid:b03567c3-7160-4983-9b8b-8f9503d2ca5a,Namespace:kube-system,Attempt:0,}"
Jan 30 12:57:26.578780 containerd[1438]: time="2025-01-30T12:57:26.578677246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:57:26.578780 containerd[1438]: time="2025-01-30T12:57:26.578742927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:57:26.578780 containerd[1438]: time="2025-01-30T12:57:26.578754047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:57:26.580149 containerd[1438]: time="2025-01-30T12:57:26.578833247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:57:26.597444 systemd[1]: Started cri-containerd-9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04.scope - libcontainer container 9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04.
Jan 30 12:57:26.616076 containerd[1438]: time="2025-01-30T12:57:26.616038736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wgt5,Uid:b03567c3-7160-4983-9b8b-8f9503d2ca5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\""
Jan 30 12:57:26.617181 kubelet[2466]: E0130 12:57:26.617151 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:26.620029 containerd[1438]: time="2025-01-30T12:57:26.619970430Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 12:57:26.647530 containerd[1438]: time="2025-01-30T12:57:26.647400326Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13\""
Jan 30 12:57:26.649608 containerd[1438]: time="2025-01-30T12:57:26.649505373Z" level=info msg="StartContainer for \"9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13\""
Jan 30 12:57:26.675443 systemd[1]: Started cri-containerd-9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13.scope - libcontainer container 9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13.
Jan 30 12:57:26.697742 containerd[1438]: time="2025-01-30T12:57:26.697674981Z" level=info msg="StartContainer for \"9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13\" returns successfully"
Jan 30 12:57:26.709466 systemd[1]: cri-containerd-9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13.scope: Deactivated successfully.
Jan 30 12:57:26.769023 containerd[1438]: time="2025-01-30T12:57:26.768940949Z" level=info msg="shim disconnected" id=9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13 namespace=k8s.io
Jan 30 12:57:26.769023 containerd[1438]: time="2025-01-30T12:57:26.769014949Z" level=warning msg="cleaning up after shim disconnected" id=9422183363b2ae117f0f260b62d0f2a266390975921c5e04fe45a26f3ed9ae13 namespace=k8s.io
Jan 30 12:57:26.769023 containerd[1438]: time="2025-01-30T12:57:26.769024949Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:27.387588 kubelet[2466]: E0130 12:57:27.387553 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:27.390010 containerd[1438]: time="2025-01-30T12:57:27.389775680Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 12:57:27.399097 containerd[1438]: time="2025-01-30T12:57:27.399050312Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9\""
Jan 30 12:57:27.399928 containerd[1438]: time="2025-01-30T12:57:27.399894715Z" level=info msg="StartContainer for \"6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9\""
Jan 30 12:57:27.430426 systemd[1]: Started cri-containerd-6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9.scope - libcontainer container 6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9.
Jan 30 12:57:27.456136 containerd[1438]: time="2025-01-30T12:57:27.456076552Z" level=info msg="StartContainer for \"6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9\" returns successfully"
Jan 30 12:57:27.462331 systemd[1]: cri-containerd-6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9.scope: Deactivated successfully.
Jan 30 12:57:27.478145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9-rootfs.mount: Deactivated successfully.
Jan 30 12:57:27.482848 containerd[1438]: time="2025-01-30T12:57:27.482673886Z" level=info msg="shim disconnected" id=6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9 namespace=k8s.io
Jan 30 12:57:27.482848 containerd[1438]: time="2025-01-30T12:57:27.482750206Z" level=warning msg="cleaning up after shim disconnected" id=6beb9af27f1e621a87c361ef1aec529dd52e10935f7e0b70455bb2e621dcc9c9 namespace=k8s.io
Jan 30 12:57:27.482848 containerd[1438]: time="2025-01-30T12:57:27.482758566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:28.023177 kubelet[2466]: I0130 12:57:28.023079 2466 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T12:57:28Z","lastTransitionTime":"2025-01-30T12:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 12:57:28.391321 kubelet[2466]: E0130 12:57:28.391206 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:28.394409 containerd[1438]: time="2025-01-30T12:57:28.394371655Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 12:57:28.428070 containerd[1438]: time="2025-01-30T12:57:28.428023814Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862\""
Jan 30 12:57:28.428854 containerd[1438]: time="2025-01-30T12:57:28.428826576Z" level=info msg="StartContainer for \"6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862\""
Jan 30 12:57:28.457927 systemd[1]: run-containerd-runc-k8s.io-6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862-runc.wrei5R.mount: Deactivated successfully.
Jan 30 12:57:28.468428 systemd[1]: Started cri-containerd-6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862.scope - libcontainer container 6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862.
Jan 30 12:57:28.494774 systemd[1]: cri-containerd-6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862.scope: Deactivated successfully.
Jan 30 12:57:28.495590 containerd[1438]: time="2025-01-30T12:57:28.495394372Z" level=info msg="StartContainer for \"6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862\" returns successfully"
Jan 30 12:57:28.519029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862-rootfs.mount: Deactivated successfully.
Jan 30 12:57:28.526103 containerd[1438]: time="2025-01-30T12:57:28.525879960Z" level=info msg="shim disconnected" id=6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862 namespace=k8s.io
Jan 30 12:57:28.526103 containerd[1438]: time="2025-01-30T12:57:28.525939120Z" level=warning msg="cleaning up after shim disconnected" id=6d404590af0a7c0f603ea90fa3c67158922fe3c8ee19fa79f6944e8e2de9d862 namespace=k8s.io
Jan 30 12:57:28.526103 containerd[1438]: time="2025-01-30T12:57:28.525948480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:29.395259 kubelet[2466]: E0130 12:57:29.395209 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:29.399465 containerd[1438]: time="2025-01-30T12:57:29.398422535Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 12:57:29.416832 containerd[1438]: time="2025-01-30T12:57:29.416779960Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9\""
Jan 30 12:57:29.419203 containerd[1438]: time="2025-01-30T12:57:29.419140409Z" level=info msg="StartContainer for \"86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9\""
Jan 30 12:57:29.454429 systemd[1]: Started cri-containerd-86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9.scope - libcontainer container 86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9.
Jan 30 12:57:29.475691 systemd[1]: cri-containerd-86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9.scope: Deactivated successfully.
Jan 30 12:57:29.479850 containerd[1438]: time="2025-01-30T12:57:29.479510624Z" level=info msg="StartContainer for \"86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9\" returns successfully"
Jan 30 12:57:29.526636 containerd[1438]: time="2025-01-30T12:57:29.498707652Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb03567c3_7160_4983_9b8b_8f9503d2ca5a.slice/cri-containerd-86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9.scope/memory.events\": no such file or directory"
Jan 30 12:57:29.527495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9-rootfs.mount: Deactivated successfully.
Jan 30 12:57:29.534606 containerd[1438]: time="2025-01-30T12:57:29.534548980Z" level=info msg="shim disconnected" id=86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9 namespace=k8s.io
Jan 30 12:57:29.535486 containerd[1438]: time="2025-01-30T12:57:29.534870621Z" level=warning msg="cleaning up after shim disconnected" id=86c5c802bb97ffaced2ae2c1c33d2cc602fb6ab3076945ed325d60005ec038f9 namespace=k8s.io
Jan 30 12:57:29.535486 containerd[1438]: time="2025-01-30T12:57:29.534894381Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:30.398242 kubelet[2466]: E0130 12:57:30.398104 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:30.401325 containerd[1438]: time="2025-01-30T12:57:30.400954277Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 12:57:30.438016 containerd[1438]: time="2025-01-30T12:57:30.437942129Z" level=info msg="CreateContainer within sandbox \"9ffb76f8ab65566cf71e8e11deedb6e20d7ba28e99ba5beb28a4e39290964e04\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"90f8cfa5fb6e4bf7d458d87dd634d269d5f96805d0c3113eb6eaf4a2e0609b7d\""
Jan 30 12:57:30.442461 containerd[1438]: time="2025-01-30T12:57:30.438722292Z" level=info msg="StartContainer for \"90f8cfa5fb6e4bf7d458d87dd634d269d5f96805d0c3113eb6eaf4a2e0609b7d\""
Jan 30 12:57:30.472471 systemd[1]: Started cri-containerd-90f8cfa5fb6e4bf7d458d87dd634d269d5f96805d0c3113eb6eaf4a2e0609b7d.scope - libcontainer container 90f8cfa5fb6e4bf7d458d87dd634d269d5f96805d0c3113eb6eaf4a2e0609b7d.
Jan 30 12:57:30.514853 containerd[1438]: time="2025-01-30T12:57:30.514804085Z" level=info msg="StartContainer for \"90f8cfa5fb6e4bf7d458d87dd634d269d5f96805d0c3113eb6eaf4a2e0609b7d\" returns successfully"
Jan 30 12:57:30.841262 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 12:57:31.402708 kubelet[2466]: E0130 12:57:31.402629 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:32.554194 kubelet[2466]: E0130 12:57:32.554150 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:32.726448 systemd[1]: run-containerd-runc-k8s.io-90f8cfa5fb6e4bf7d458d87dd634d269d5f96805d0c3113eb6eaf4a2e0609b7d-runc.KA9cgd.mount: Deactivated successfully.
Jan 30 12:57:33.831296 systemd-networkd[1377]: lxc_health: Link UP
Jan 30 12:57:33.842023 systemd-networkd[1377]: lxc_health: Gained carrier
Jan 30 12:57:34.554471 kubelet[2466]: E0130 12:57:34.554381 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:34.572828 kubelet[2466]: I0130 12:57:34.572747 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wgt5" podStartSLOduration=8.572729208 podStartE2EDuration="8.572729208s" podCreationTimestamp="2025-01-30 12:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:57:31.436085082 +0000 UTC m=+85.390041877" watchObservedRunningTime="2025-01-30 12:57:34.572729208 +0000 UTC m=+88.526685963"
Jan 30 12:57:35.409917 kubelet[2466]: E0130 12:57:35.409873 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:35.608395 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Jan 30 12:57:36.411980 kubelet[2466]: E0130 12:57:36.411924 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:39.142951 kubelet[2466]: E0130 12:57:39.142423 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:39.178215 sshd[4293]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:39.181502 systemd[1]: sshd@25-10.0.0.64:22-10.0.0.1:43026.service: Deactivated successfully.
Jan 30 12:57:39.184168 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 12:57:39.185766 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit.
Jan 30 12:57:39.188474 systemd-logind[1418]: Removed session 26.