Jan 29 11:01:12.935395 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:01:12.935415 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:01:12.935424 kernel: KASLR enabled
Jan 29 11:01:12.935430 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:01:12.935436 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:01:12.935441 kernel: random: crng init done
Jan 29 11:01:12.935448 kernel: secureboot: Secure boot disabled
Jan 29 11:01:12.935454 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:01:12.935460 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:01:12.935467 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:01:12.935473 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935479 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935485 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935491 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935498 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935505 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935511 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935518 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935524 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:12.935530 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:01:12.935536 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:01:12.935542 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:01:12.935548 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:01:12.935554 kernel: Zone ranges:
Jan 29 11:01:12.935560 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:01:12.935567 kernel: DMA32 empty
Jan 29 11:01:12.935573 kernel: Normal empty
Jan 29 11:01:12.935579 kernel: Movable zone start for each node
Jan 29 11:01:12.935598 kernel: Early memory node ranges
Jan 29 11:01:12.935604 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:01:12.935611 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:01:12.935617 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:01:12.935623 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:01:12.935629 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:01:12.935635 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:01:12.935641 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:01:12.935647 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:01:12.935655 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:01:12.935661 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:01:12.935668 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:01:12.935676 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:01:12.935683 kernel: psci: Trusted OS migration not required
Jan 29 11:01:12.935690 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:01:12.935697 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:01:12.935704 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:01:12.935711 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:01:12.935718 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:01:12.935724 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:01:12.935731 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:01:12.935738 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:01:12.935744 kernel: CPU features: detected: Spectre-v4
Jan 29 11:01:12.935750 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:01:12.935757 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:01:12.935765 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:01:12.935772 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:01:12.935778 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:01:12.935785 kernel: alternatives: applying boot alternatives
Jan 29 11:01:12.935792 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:01:12.935799 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:01:12.935806 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:01:12.935813 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:01:12.935819 kernel: Fallback order for Node 0: 0
Jan 29 11:01:12.935826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:01:12.935832 kernel: Policy zone: DMA
Jan 29 11:01:12.935840 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:01:12.935847 kernel: software IO TLB: area num 4.
Jan 29 11:01:12.935853 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:01:12.935860 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 29 11:01:12.935867 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:01:12.935873 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:01:12.935890 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:01:12.935897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:01:12.935904 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:01:12.935910 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:01:12.935917 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:01:12.935923 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:01:12.935931 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:01:12.935938 kernel: GICv3: 256 SPIs implemented
Jan 29 11:01:12.935944 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:01:12.935951 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:01:12.935957 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:01:12.935964 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:01:12.935971 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:01:12.935977 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:01:12.935984 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:01:12.935991 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:01:12.935997 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:01:12.936009 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:01:12.936016 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:01:12.936023 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:01:12.936030 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:01:12.936036 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:01:12.936043 kernel: arm-pv: using stolen time PV
Jan 29 11:01:12.936051 kernel: Console: colour dummy device 80x25
Jan 29 11:01:12.936058 kernel: ACPI: Core revision 20230628
Jan 29 11:01:12.936065 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:01:12.936072 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:01:12.936081 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:01:12.936087 kernel: landlock: Up and running.
Jan 29 11:01:12.936094 kernel: SELinux: Initializing.
Jan 29 11:01:12.936101 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:01:12.936108 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:01:12.936115 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:01:12.936122 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:01:12.936128 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:01:12.936135 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:01:12.936143 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:01:12.936150 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:01:12.936157 kernel: Remapping and enabling EFI services.
Jan 29 11:01:12.936164 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:01:12.936171 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:01:12.936178 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:01:12.936184 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:01:12.936191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:01:12.936198 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:01:12.936205 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:01:12.936213 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:01:12.936220 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:01:12.936232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:01:12.936240 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:01:12.936247 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:01:12.936254 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:01:12.936261 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:01:12.936268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:01:12.936275 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:01:12.936284 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:01:12.936291 kernel: SMP: Total of 4 processors activated.
Jan 29 11:01:12.936298 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:01:12.936305 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:01:12.936312 kernel: CPU features: detected: Common not Private translations
Jan 29 11:01:12.936319 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:01:12.936331 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:01:12.936338 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:01:12.936347 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:01:12.936354 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:01:12.936361 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:01:12.936368 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:01:12.936375 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:01:12.936382 kernel: alternatives: applying system-wide alternatives
Jan 29 11:01:12.936389 kernel: devtmpfs: initialized
Jan 29 11:01:12.936396 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:01:12.936404 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:01:12.936412 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:01:12.936419 kernel: SMBIOS 3.0.0 present.
Jan 29 11:01:12.936426 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:01:12.936433 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:01:12.936440 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:01:12.936448 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:01:12.936455 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:01:12.936462 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:01:12.936469 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Jan 29 11:01:12.936477 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:01:12.936484 kernel: cpuidle: using governor menu
Jan 29 11:01:12.936491 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:01:12.936498 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:01:12.936505 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:01:12.936512 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:01:12.936519 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:01:12.936527 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:01:12.936534 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:01:12.936542 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:01:12.936549 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:01:12.936556 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:01:12.936563 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:01:12.936570 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:01:12.936577 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:01:12.936590 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:01:12.936597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:01:12.936604 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:01:12.936612 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:01:12.936619 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:01:12.936627 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:01:12.936634 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:01:12.936641 kernel: ACPI: Interpreter enabled
Jan 29 11:01:12.936648 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:01:12.936655 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:01:12.936662 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:01:12.936669 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:01:12.936678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:01:12.936799 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:01:12.936871 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:01:12.936939 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:01:12.937003 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:01:12.937066 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:01:12.937076 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:01:12.937086 kernel: PCI host bridge to bus 0000:00
Jan 29 11:01:12.937156 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:01:12.937232 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:01:12.937294 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:01:12.937361 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:01:12.937442 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:01:12.937518 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:01:12.937615 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:01:12.937685 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:01:12.937750 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:01:12.937815 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:01:12.937879 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:01:12.937944 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:01:12.938003 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:01:12.938064 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:01:12.938121 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:01:12.938131 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:01:12.938138 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:01:12.938145 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:01:12.938152 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:01:12.938159 kernel: iommu: Default domain type: Translated
Jan 29 11:01:12.938166 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:01:12.938176 kernel: efivars: Registered efivars operations
Jan 29 11:01:12.938182 kernel: vgaarb: loaded
Jan 29 11:01:12.938190 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:01:12.938197 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:01:12.938204 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:01:12.938211 kernel: pnp: PnP ACPI init
Jan 29 11:01:12.938287 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:01:12.938297 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:01:12.938307 kernel: NET: Registered PF_INET protocol family
Jan 29 11:01:12.938314 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:01:12.938322 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:01:12.938337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:01:12.938345 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:01:12.938352 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:01:12.938360 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:01:12.938367 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:01:12.938374 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:01:12.938385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:01:12.938392 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:01:12.938400 kernel: kvm [1]: HYP mode not available
Jan 29 11:01:12.938407 kernel: Initialise system trusted keyrings
Jan 29 11:01:12.938414 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:01:12.938422 kernel: Key type asymmetric registered
Jan 29 11:01:12.938428 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:01:12.938436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:01:12.938443 kernel: io scheduler mq-deadline registered
Jan 29 11:01:12.938452 kernel: io scheduler kyber registered
Jan 29 11:01:12.938459 kernel: io scheduler bfq registered
Jan 29 11:01:12.938466 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:01:12.938473 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:01:12.938481 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:01:12.938553 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:01:12.938563 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:01:12.938570 kernel: thunder_xcv, ver 1.0
Jan 29 11:01:12.938577 kernel: thunder_bgx, ver 1.0
Jan 29 11:01:12.938602 kernel: nicpf, ver 1.0
Jan 29 11:01:12.938609 kernel: nicvf, ver 1.0
Jan 29 11:01:12.938691 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:01:12.938757 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:01:12 UTC (1738148472)
Jan 29 11:01:12.938766 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:01:12.938774 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:01:12.938781 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:01:12.938789 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:01:12.938798 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:01:12.938806 kernel: Segment Routing with IPv6
Jan 29 11:01:12.938813 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:01:12.938820 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:01:12.938827 kernel: Key type dns_resolver registered
Jan 29 11:01:12.938834 kernel: registered taskstats version 1
Jan 29 11:01:12.938841 kernel: Loading compiled-in X.509 certificates
Jan 29 11:01:12.938848 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:01:12.938855 kernel: Key type .fscrypt registered
Jan 29 11:01:12.938863 kernel: Key type fscrypt-provisioning registered
Jan 29 11:01:12.938870 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:01:12.938877 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:01:12.938884 kernel: ima: No architecture policies found
Jan 29 11:01:12.938891 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:01:12.938898 kernel: clk: Disabling unused clocks
Jan 29 11:01:12.938905 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:01:12.938912 kernel: Run /init as init process
Jan 29 11:01:12.938919 kernel: with arguments:
Jan 29 11:01:12.938928 kernel: /init
Jan 29 11:01:12.938935 kernel: with environment:
Jan 29 11:01:12.938942 kernel: HOME=/
Jan 29 11:01:12.938949 kernel: TERM=linux
Jan 29 11:01:12.938955 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:01:12.938964 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:01:12.939042 systemd[1]: Detected virtualization kvm.
Jan 29 11:01:12.939052 systemd[1]: Detected architecture arm64.
Jan 29 11:01:12.939063 systemd[1]: Running in initrd.
Jan 29 11:01:12.939070 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:01:12.939078 systemd[1]: Hostname set to <localhost>.
Jan 29 11:01:12.939086 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:01:12.939093 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:01:12.939101 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:01:12.939109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:01:12.939117 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:01:12.939127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:01:12.939134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:01:12.939142 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:01:12.939152 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:01:12.939160 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:01:12.939168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:01:12.939177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:01:12.939185 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:01:12.939193 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:01:12.939201 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:01:12.939208 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:01:12.939217 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:01:12.939225 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:01:12.939233 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:01:12.939240 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:01:12.939250 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:01:12.939258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:01:12.939266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:01:12.939273 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:01:12.939281 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:01:12.939289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:01:12.939297 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:01:12.939304 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:01:12.939312 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:01:12.939322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:01:12.939337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:01:12.939345 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:01:12.939353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:01:12.939361 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:01:12.939372 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:01:12.939380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:01:12.939387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:01:12.939395 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:01:12.939403 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:01:12.939411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:01:12.939440 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 11:01:12.939462 kernel: Bridge firewalling registered
Jan 29 11:01:12.939470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:01:12.939478 systemd-journald[237]: Journal started
Jan 29 11:01:12.939499 systemd-journald[237]: Runtime Journal (/run/log/journal/879fb755287f48a58c48dbaa46399047) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:01:12.917011 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 11:01:12.942106 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:01:12.935311 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 11:01:12.944758 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:01:12.945687 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:01:12.949941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:01:12.952622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:01:12.953628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:01:12.956151 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:01:12.961849 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:01:12.964268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:01:12.971569 dracut-cmdline[275]: dracut-dracut-053
Jan 29 11:01:12.974130 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:01:12.998996 systemd-resolved[280]: Positive Trust Anchors:
Jan 29 11:01:12.999071 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:01:12.999102 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:01:13.005164 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jan 29 11:01:13.006216 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:01:13.007150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:01:13.044613 kernel: SCSI subsystem initialized
Jan 29 11:01:13.048599 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:01:13.057612 kernel: iscsi: registered transport (tcp)
Jan 29 11:01:13.069612 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:01:13.069631 kernel: QLogic iSCSI HBA Driver
Jan 29 11:01:13.110998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:01:13.123770 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:01:13.140230 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:01:13.140284 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:01:13.141624 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:01:13.189610 kernel: raid6: neonx8 gen() 15716 MB/s
Jan 29 11:01:13.206602 kernel: raid6: neonx4 gen() 15643 MB/s
Jan 29 11:01:13.223606 kernel: raid6: neonx2 gen() 13182 MB/s
Jan 29 11:01:13.240601 kernel: raid6: neonx1 gen() 10403 MB/s
Jan 29 11:01:13.257608 kernel: raid6: int64x8 gen() 6912 MB/s
Jan 29 11:01:13.274611 kernel: raid6: int64x4 gen() 7349 MB/s
Jan 29 11:01:13.291622 kernel: raid6: int64x2 gen() 6128 MB/s
Jan 29 11:01:13.308711 kernel: raid6: int64x1 gen() 5050 MB/s
Jan 29 11:01:13.308766 kernel: raid6: using algorithm neonx8 gen() 15716 MB/s
Jan 29 11:01:13.325620 kernel: raid6: .... xor() 11909 MB/s, rmw enabled
Jan 29 11:01:13.325675 kernel: raid6: using neon recovery algorithm
Jan 29 11:01:13.330612 kernel: xor: measuring software checksum speed
Jan 29 11:01:13.330657 kernel: 8regs : 19793 MB/sec
Jan 29 11:01:13.332026 kernel: 32regs : 18511 MB/sec
Jan 29 11:01:13.332042 kernel: arm64_neon : 27079 MB/sec
Jan 29 11:01:13.332051 kernel: xor: using function: arm64_neon (27079 MB/sec)
Jan 29 11:01:13.381610 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:01:13.392231 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:01:13.402811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:01:13.414455 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jan 29 11:01:13.417628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:01:13.419840 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:01:13.434884 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jan 29 11:01:13.464627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:01:13.478749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:01:13.524360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:01:13.535604 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:01:13.551238 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:01:13.552513 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:01:13.553862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:01:13.555736 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:01:13.561948 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:01:13.572096 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:01:13.572209 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:01:13.572222 kernel: GPT:9289727 != 19775487
Jan 29 11:01:13.572231 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:01:13.572241 kernel: GPT:9289727 != 19775487
Jan 29 11:01:13.572249 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:01:13.572260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:01:13.564766 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:01:13.578621 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:01:13.587600 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512)
Jan 29 11:01:13.587643 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (514)
Jan 29 11:01:13.592874 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:01:13.598287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:01:13.607997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:01:13.612236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:01:13.613171 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:01:13.625763 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:01:13.626599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:01:13.626658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:01:13.629308 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:01:13.631460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:01:13.633870 disk-uuid[543]: Primary Header is updated.
Jan 29 11:01:13.633870 disk-uuid[543]: Secondary Entries is updated.
Jan 29 11:01:13.633870 disk-uuid[543]: Secondary Header is updated.
Jan 29 11:01:13.637024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:01:13.631524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:01:13.634808 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:01:13.636736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:01:13.643598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:01:13.651728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:01:13.655403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:01:13.682915 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:01:14.647821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:01:14.647899 disk-uuid[544]: The operation has completed successfully.
Jan 29 11:01:14.668350 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:01:14.668482 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:01:14.691771 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:01:14.694503 sh[571]: Success
Jan 29 11:01:14.709801 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:01:14.750043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:01:14.751606 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:01:14.752352 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:01:14.763605 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:01:14.763651 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:01:14.763662 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:01:14.765149 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:01:14.765163 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:01:14.768973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:01:14.770191 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:01:14.782749 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:01:14.784107 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:01:14.791963 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:01:14.792001 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:01:14.792012 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:01:14.793626 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:01:14.801692 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:01:14.803044 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:01:14.809107 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:01:14.817752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:01:14.879314 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:01:14.895766 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:01:14.921008 systemd-networkd[763]: lo: Link UP
Jan 29 11:01:14.921016 systemd-networkd[763]: lo: Gained carrier
Jan 29 11:01:14.921924 systemd-networkd[763]: Enumeration completed
Jan 29 11:01:14.922683 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:01:14.922775 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:01:14.922779 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:01:14.923747 systemd[1]: Reached target network.target - Network.
Jan 29 11:01:14.924006 systemd-networkd[763]: eth0: Link UP
Jan 29 11:01:14.924010 systemd-networkd[763]: eth0: Gained carrier
Jan 29 11:01:14.924016 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:01:14.930231 ignition[664]: Ignition 2.20.0
Jan 29 11:01:14.930237 ignition[664]: Stage: fetch-offline
Jan 29 11:01:14.930272 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:14.930280 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:01:14.930519 ignition[664]: parsed url from cmdline: ""
Jan 29 11:01:14.930522 ignition[664]: no config URL provided
Jan 29 11:01:14.930527 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:01:14.930537 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:01:14.930565 ignition[664]: op(1): [started] loading QEMU firmware config module
Jan 29 11:01:14.930570 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:01:14.938017 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:01:14.938044 ignition[664]: QEMU firmware config was not found. Ignoring...
Jan 29 11:01:14.939636 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:01:14.976127 ignition[664]: parsing config with SHA512: 94f2ea9b4f5caee29bdbe03adda17ae5dd47f0cdefd8f68ef3da57e035a54c9aa67c1eee23fe07b816891480454c8edf898b49884b5c1882ceba7544a7137894
Jan 29 11:01:14.980717 unknown[664]: fetched base config from "system"
Jan 29 11:01:14.980728 unknown[664]: fetched user config from "qemu"
Jan 29 11:01:14.981168 ignition[664]: fetch-offline: fetch-offline passed
Jan 29 11:01:14.981250 ignition[664]: Ignition finished successfully
Jan 29 11:01:14.982995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:01:14.984259 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:01:14.989751 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:01:15.000638 ignition[768]: Ignition 2.20.0
Jan 29 11:01:15.000649 ignition[768]: Stage: kargs
Jan 29 11:01:15.000806 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:15.000815 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:01:15.003618 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:01:15.001730 ignition[768]: kargs: kargs passed
Jan 29 11:01:15.001775 ignition[768]: Ignition finished successfully
Jan 29 11:01:15.016792 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:01:15.029531 ignition[777]: Ignition 2.20.0
Jan 29 11:01:15.029543 ignition[777]: Stage: disks
Jan 29 11:01:15.029721 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:15.029731 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:01:15.030649 ignition[777]: disks: disks passed
Jan 29 11:01:15.030701 ignition[777]: Ignition finished successfully
Jan 29 11:01:15.033499 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:01:15.035459 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:01:15.037202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:01:15.038197 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:01:15.039840 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:01:15.041121 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:01:15.054793 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:01:15.067979 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:01:15.072642 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:01:15.090713 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:01:15.137542 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:01:15.138688 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:01:15.138608 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:01:15.155676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:01:15.157144 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:01:15.158023 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:01:15.158060 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:01:15.158082 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:01:15.163525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:01:15.165647 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796)
Jan 29 11:01:15.165667 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:01:15.165790 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:01:15.169353 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:01:15.169376 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:01:15.170613 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:01:15.171953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:01:15.230692 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:01:15.234803 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:01:15.238571 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:01:15.241467 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:01:15.335393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:01:15.343695 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:01:15.345173 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:01:15.355620 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:01:15.373297 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:01:15.375048 ignition[910]: INFO : Ignition 2.20.0
Jan 29 11:01:15.375048 ignition[910]: INFO : Stage: mount
Jan 29 11:01:15.376313 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:15.376313 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:01:15.376313 ignition[910]: INFO : mount: mount passed
Jan 29 11:01:15.379827 ignition[910]: INFO : Ignition finished successfully
Jan 29 11:01:15.378176 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:01:15.390756 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:01:15.763028 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:01:15.777767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:01:15.783836 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Jan 29 11:01:15.783877 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:01:15.783888 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:01:15.784961 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:01:15.786592 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:01:15.787941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:01:15.805097 ignition[942]: INFO : Ignition 2.20.0
Jan 29 11:01:15.805097 ignition[942]: INFO : Stage: files
Jan 29 11:01:15.806636 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:15.806636 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:01:15.806636 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:01:15.809985 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:01:15.809985 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:01:15.809985 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:01:15.809985 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:01:15.809985 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:01:15.809290 unknown[942]: wrote ssh authorized keys file for user: core
Jan 29 11:01:15.819761 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:01:15.819761 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 11:01:15.857744 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:01:15.965370 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:01:15.965370 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:01:15.969138 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 11:01:16.060708 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 29 11:01:16.303108 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:01:16.357251 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:01:16.357251 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:16.360101 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 11:01:16.533501 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:01:16.717757 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:16.717757 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 11:01:16.720734 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:01:16.755534 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:01:16.760992 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:01:16.762191 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:01:16.762191 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:01:16.762191 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:01:16.762191 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:01:16.762191 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:01:16.762191 ignition[942]: INFO : files: files passed
Jan 29 11:01:16.762191 ignition[942]: INFO : Ignition finished successfully
Jan 29 11:01:16.762713 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:01:16.788097 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:01:16.789992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:01:16.794317 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:01:16.794417 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:01:16.802064 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:01:16.806996 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:01:16.806996 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:01:16.809521 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:01:16.812905 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:01:16.814249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:01:16.825806 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:01:16.868977 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:01:16.869658 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:01:16.871081 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:01:16.872342 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:01:16.873946 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:01:16.874868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:01:16.903663 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:01:16.916768 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:01:16.928532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:01:16.929656 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:01:16.931364 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:01:16.932833 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:01:16.932971 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:01:16.934970 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:01:16.936564 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:01:16.937962 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:01:16.939321 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:01:16.941022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:01:16.942731 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:01:16.944251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:01:16.945769 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:01:16.947309 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:01:16.948675 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:01:16.949840 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:01:16.949960 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:01:16.951752 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:01:16.953162 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:01:16.954744 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:01:16.954819 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:01:16.956297 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:01:16.956419 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:01:16.958530 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:01:16.958662 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:01:16.960115 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:01:16.961334 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:01:16.964616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:01:16.966693 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:01:16.967446 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:01:16.968759 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:01:16.968858 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:01:16.970119 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:01:16.970196 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:01:16.971454 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:01:16.971563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:01:16.973028 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:01:16.973130 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:01:16.992130 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:01:17.008661 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:01:17.009394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:01:17.009525 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:01:17.014841 ignition[996]: INFO : Ignition 2.20.0 Jan 29 11:01:17.014841 ignition[996]: INFO : Stage: umount Jan 29 11:01:17.014841 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:01:17.014841 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:01:17.011859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:01:17.019933 ignition[996]: INFO : umount: umount passed Jan 29 11:01:17.019933 ignition[996]: INFO : Ignition finished successfully Jan 29 11:01:17.012007 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 29 11:01:17.016721 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:01:17.016832 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:01:17.019464 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:01:17.019613 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:01:17.025707 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:01:17.027815 systemd[1]: Stopped target network.target - Network. Jan 29 11:01:17.028759 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:01:17.028813 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:01:17.031789 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:01:17.031841 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:01:17.033207 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:01:17.033243 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:01:17.034478 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:01:17.034518 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:01:17.035896 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:01:17.037176 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:01:17.040629 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 29 11:01:17.042113 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:01:17.042235 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:01:17.044151 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:01:17.044184 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:01:17.059474 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:01:17.060227 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:01:17.060311 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:01:17.061952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:01:17.063668 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:01:17.064148 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:01:17.067390 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:01:17.067468 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:01:17.070141 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:01:17.070190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:01:17.071708 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:01:17.071751 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:01:17.083676 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:01:17.083807 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:01:17.085492 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:01:17.085690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:01:17.088198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 29 11:01:17.088263 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:01:17.090263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:01:17.090298 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:01:17.092034 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:01:17.092088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:01:17.094549 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:01:17.094619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:01:17.097213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:01:17.097270 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:01:17.110854 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:01:17.111672 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:01:17.111728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:01:17.113373 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:01:17.113418 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:01:17.115304 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:01:17.115355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:01:17.116932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:01:17.116969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:01:17.118811 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:01:17.118898 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:01:17.120459 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:01:17.120536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:01:17.122236 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:01:17.123094 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:01:17.123198 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:01:17.125216 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:01:17.134708 systemd[1]: Switching root. Jan 29 11:01:17.159421 systemd-journald[237]: Journal stopped Jan 29 11:01:17.860551 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 29 11:01:17.860704 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:01:17.860735 kernel: SELinux: policy capability open_perms=1 Jan 29 11:01:17.860746 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:01:17.860794 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:01:17.860809 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:01:17.860822 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:01:17.860833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:01:17.860842 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:01:17.860852 kernel: audit: type=1403 audit(1738148477.315:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:01:17.860863 systemd[1]: Successfully loaded SELinux policy in 32.754ms. 
Jan 29 11:01:17.860877 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.040ms. Jan 29 11:01:17.860889 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:01:17.860900 systemd[1]: Detected virtualization kvm. Jan 29 11:01:17.860910 systemd[1]: Detected architecture arm64. Jan 29 11:01:17.860923 systemd[1]: Detected first boot. Jan 29 11:01:17.860933 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:01:17.860943 zram_generator::config[1040]: No configuration found. Jan 29 11:01:17.860954 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:01:17.860965 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:01:17.860975 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:01:17.860987 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:01:17.860998 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:01:17.861010 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:01:17.861020 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:01:17.861030 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:01:17.861041 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:01:17.861052 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:01:17.861062 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:01:17.861072 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:01:17.861082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:01:17.861092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:01:17.861104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:01:17.861115 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:01:17.861125 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:01:17.861135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:01:17.861145 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:01:17.861156 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:01:17.861169 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:01:17.861180 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:01:17.861190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:01:17.861201 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:01:17.861212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:01:17.861222 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 29 11:01:17.861233 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:01:17.861243 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:01:17.861253 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:01:17.861263 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:01:17.861275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:01:17.861286 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:01:17.861296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:01:17.861307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:01:17.861317 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:01:17.861326 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:01:17.861337 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:01:17.861348 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:01:17.861367 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:01:17.861380 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:01:17.861398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:01:17.861410 systemd[1]: Reached target machines.target - Containers. Jan 29 11:01:17.861421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:01:17.861431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:01:17.861441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:01:17.861451 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:01:17.861462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:01:17.861472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:01:17.861484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:01:17.861494 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:01:17.861504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:01:17.861515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:01:17.861525 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:01:17.861534 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:01:17.861544 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:01:17.861554 kernel: fuse: init (API version 7.39) Jan 29 11:01:17.861565 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:01:17.861575 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:01:17.861697 kernel: loop: module loaded Jan 29 11:01:17.861719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 29 11:01:17.861750 kernel: ACPI: bus type drm_connector registered Jan 29 11:01:17.861789 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:01:17.861802 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:01:17.861815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:01:17.861825 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:01:17.861835 systemd[1]: Stopped verity-setup.service. Jan 29 11:01:17.861851 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:01:17.861861 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:01:17.861872 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:01:17.861892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:01:17.861904 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:01:17.861919 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:01:17.861960 systemd-journald[1104]: Collecting audit messages is disabled. Jan 29 11:01:17.861983 systemd-journald[1104]: Journal started Jan 29 11:01:17.862005 systemd-journald[1104]: Runtime Journal (/run/log/journal/879fb755287f48a58c48dbaa46399047) is 5.9M, max 47.3M, 41.4M free. Jan 29 11:01:17.675039 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:01:17.694550 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:01:17.694894 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:01:17.864618 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:01:17.864892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:01:17.866184 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:01:17.866359 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:01:17.867534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:01:17.867685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:01:17.868834 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:01:17.869986 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:01:17.870129 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:01:17.871192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:01:17.871323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:01:17.872521 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:01:17.872670 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:01:17.873867 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:01:17.873999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:01:17.875214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:01:17.876386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:01:17.877674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:01:17.889565 systemd[1]: Reached target network-pre.target - Preparation for Network. 
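
journald comes up here with only the volatile journal in /run (5.9M used against a 47.3M cap); the flush to persistent storage under /var/log/journal happens a few lines further down. Those caps are journald's default disk-percentage heuristics rather than explicit settings; pinning them would take a drop-in roughly like this (hypothetical file, values chosen to mirror the logged limits):

    # /etc/systemd/journald.conf.d/10-size.conf (hypothetical)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M    # runtime journal cap logged above: 47.3M
    SystemMaxUse=196M    # persistent journal cap logged below: 195.6M
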
Jan 29 11:01:17.898685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:01:17.900645 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:01:17.901566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:01:17.901613 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:01:17.903298 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:01:17.905512 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:01:17.907711 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:01:17.908788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:01:17.910540 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:01:17.912379 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:01:17.913405 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:01:17.914794 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:01:17.918707 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:01:17.919752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:01:17.922774 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:01:17.925662 systemd-journald[1104]: Time spent on flushing to /var/log/journal/879fb755287f48a58c48dbaa46399047 is 32.471ms for 862 entries. Jan 29 11:01:17.925662 systemd-journald[1104]: System Journal (/var/log/journal/879fb755287f48a58c48dbaa46399047) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:01:17.967627 systemd-journald[1104]: Received client request to flush runtime journal. Jan 29 11:01:17.967684 kernel: loop0: detected capacity change from 0 to 201592 Jan 29 11:01:17.967709 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:01:17.926854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:01:17.929735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:01:17.932988 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:01:17.934239 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:01:17.936097 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:01:17.957766 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:01:17.959544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:01:17.960971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:01:17.964697 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:01:17.967398 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:01:17.969500 udevadm[1159]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:01:17.973891 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:01:17.988219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:01:17.989101 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:01:17.993911 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 29 11:01:17.993928 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 29 11:01:17.994599 kernel: loop1: detected capacity change from 0 to 113536 Jan 29 11:01:17.999165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:01:18.013764 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:01:18.030077 kernel: loop2: detected capacity change from 0 to 116808 Jan 29 11:01:18.037802 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:01:18.050897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:01:18.062613 kernel: loop3: detected capacity change from 0 to 201592 Jan 29 11:01:18.064315 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 29 11:01:18.064331 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 29 11:01:18.068430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:01:18.070603 kernel: loop4: detected capacity change from 0 to 113536 Jan 29 11:01:18.074731 kernel: loop5: detected capacity change from 0 to 116808 Jan 29 11:01:18.077557 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:01:18.078295 (sd-merge)[1179]: Merged extensions into '/usr'. Jan 29 11:01:18.083683 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:01:18.083700 systemd[1]: Reloading... Jan 29 11:01:18.146330 zram_generator::config[1205]: No configuration found. Jan 29 11:01:18.218678 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:01:18.240514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:01:18.275856 systemd[1]: Reloading finished in 191 ms. Jan 29 11:01:18.314661 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:01:18.317640 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:01:18.329775 systemd[1]: Starting ensure-sysext.service... Jan 29 11:01:18.331542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:01:18.351148 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:01:18.351691 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:01:18.351704 systemd[1]: Reloading... Jan 29 11:01:18.351966 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:01:18.352708 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
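
The sd-merge messages above are systemd-sysext overlaying the three extension images (containerd-flatcar, docker-flatcar, kubernetes) onto /usr; the kubernetes image is the .raw file Ignition dropped into /etc/extensions earlier. To be mergeable, an image has to ship an extension-release file whose ID matches the host's os-release, so its layout looks roughly like this (a sketch, not the actual image contents):

    kubernetes-v1.32.0-arm64.raw   (squashfs/erofs image)
    └── usr/
        ├── bin/                   kubelet, kubeadm, kubectl, ...
        └── lib/extension-release.d/extension-release.kubernetes
                                   # e.g. ID=flatcar (or ID=_any), SYSEXT_LEVEL=1.0

After boot, systemd-sysext list shows the merged images and systemd-sysext refresh re-merges after one is added or removed.
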
Jan 29 11:01:18.353022 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:01:18.353139 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:01:18.355564 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:01:18.355683 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:01:18.362967 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:01:18.363290 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:01:18.387678 zram_generator::config[1270]: No configuration found. Jan 29 11:01:18.470990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:01:18.506148 systemd[1]: Reloading finished in 154 ms. Jan 29 11:01:18.521526 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:01:18.522886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:01:18.538054 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:01:18.540052 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:01:18.541974 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:01:18.545786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:01:18.550828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:01:18.557753 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:01:18.561609 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:01:18.570864 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:01:18.573152 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:01:18.576403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:01:18.580813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:01:18.583935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:01:18.589477 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jan 29 11:01:18.591028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:01:18.592438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:01:18.593313 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:01:18.594984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:01:18.595661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:01:18.596859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:01:18.596992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:01:18.599266 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:01:18.599424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:01:18.606442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 11:01:18.610345 augenrules[1341]: No rules Jan 29 11:01:18.620996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:01:18.623841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:01:18.626082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:01:18.627424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:01:18.628155 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:01:18.630184 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:01:18.631944 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:01:18.632142 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:01:18.639916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:01:18.641818 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:01:18.644133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:01:18.644547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:01:18.646344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:01:18.646790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:01:18.648570 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:01:18.648933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:01:18.669458 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:01:18.671982 systemd[1]: Finished ensure-sysext.service. Jan 29 11:01:18.681745 systemd-resolved[1308]: Positive Trust Anchors: Jan 29 11:01:18.682388 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:01:18.682422 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:01:18.682852 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:01:18.683740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:01:18.686875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:01:18.688766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:01:18.692136 systemd-resolved[1308]: Defaulting to hostname 'linux'. Jan 29 11:01:18.698205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:01:18.700888 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:01:18.701768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
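
The trust-anchor dump above is systemd-resolved loading its built-in DNSSEC anchors: the positive anchor is the root zone's KSK-2017 DS record (key tag 20326, algorithm 8, SHA-256 digest), and the negative anchors exempt private address ranges and special-use zones (home.arpa, .local, the RFC 1918 reverse zones, and so on) from validation. Extra anchors can be dropped in without rebuilding anything, using the same zone-file DS syntax (hypothetical file, shown with the root anchor for illustration):

    # /etc/dnssec-trust-anchors.d/root.positive (hypothetical)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
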
Jan 29 11:01:18.703774 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:01:18.706664 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:01:18.707642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:01:18.707905 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:01:18.709293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:01:18.709672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:01:18.710730 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:01:18.710866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:01:18.711939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:01:18.712126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:01:18.713501 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:01:18.713841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:01:18.719757 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1368) Jan 29 11:01:18.719537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:01:18.721920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:01:18.722006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:01:18.723420 augenrules[1380]: /sbin/augenrules: No change Jan 29 11:01:18.744039 augenrules[1412]: No rules Jan 29 11:01:18.746231 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:01:18.746546 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:01:18.768459 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:01:18.769889 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:01:18.773474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:01:18.781806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:01:18.793948 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:01:18.795729 systemd-networkd[1388]: lo: Link UP Jan 29 11:01:18.795738 systemd-networkd[1388]: lo: Gained carrier Jan 29 11:01:18.796546 systemd-networkd[1388]: Enumeration completed Jan 29 11:01:18.798139 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:01:18.799205 systemd[1]: Reached target network.target - Network. Jan 29 11:01:18.800598 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:01:18.800607 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 11:01:18.803197 systemd-networkd[1388]: eth0: Link UP Jan 29 11:01:18.803204 systemd-networkd[1388]: eth0: Gained carrier Jan 29 11:01:18.803218 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:01:18.809849 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:01:18.812345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:01:18.816783 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:01:18.818990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:01:18.822686 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:01:18.825754 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Jan 29 11:01:18.826573 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:01:18.826828 systemd-timesyncd[1395]: Initial clock synchronization to Wed 2025-01-29 11:01:18.807623 UTC. Jan 29 11:01:18.835716 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:01:18.851723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:01:18.871153 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:01:18.872332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:01:18.873255 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:01:18.874200 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:01:18.875187 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:01:18.876388 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:01:18.877336 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:01:18.878268 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:01:18.879168 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:01:18.879203 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:01:18.879864 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:01:18.881289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:01:18.883547 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:01:18.895482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:01:18.897410 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:01:18.898718 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:01:18.899552 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:01:18.900291 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:01:18.901150 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:01:18.901177 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
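
eth0 is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and acquires 10.0.0.65/16 over DHCPv4; the lease also carries the NTP server, which is why systemd-timesyncd synchronizes against 10.0.0.1 right afterwards. Functionally, the catch-all unit reduces to something like this (a sketch; the file Flatcar actually ships may carry more match criteria and options):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes
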
Jan 29 11:01:18.902058 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:01:18.903770 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:01:18.906143 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:01:18.907644 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:01:18.911803 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:01:18.912702 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:01:18.917207 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:01:18.919541 jq[1440]: false Jan 29 11:01:18.921760 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:01:18.924060 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:01:18.928776 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:01:18.937789 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:01:18.940297 extend-filesystems[1441]: Found loop3 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found loop4 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found loop5 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda1 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda2 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda3 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found usr Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda4 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda6 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda7 Jan 29 11:01:18.941783 extend-filesystems[1441]: Found vda9 Jan 29 11:01:18.941783 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 29 11:01:18.940462 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:01:18.942471 dbus-daemon[1439]: [system] SELinux support is enabled Jan 29 11:01:18.941410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:01:18.943846 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:01:18.946540 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:01:18.959895 jq[1457]: true Jan 29 11:01:18.947921 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:01:18.958414 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:01:18.966635 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 29 11:01:18.975434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1363) Jan 29 11:01:18.976065 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:01:18.976224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:01:18.976525 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:01:18.976731 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:01:18.978367 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 29 11:01:18.978519 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:01:18.986203 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:01:18.987391 systemd-logind[1452]: New seat seat0. Jan 29 11:01:18.994895 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:01:19.016797 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:01:19.016137 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:01:19.016880 tar[1464]: linux-arm64/LICENSE Jan 29 11:01:19.016880 tar[1464]: linux-arm64/helm Jan 29 11:01:19.021801 update_engine[1455]: I20250129 11:01:19.002777 1455 main.cc:92] Flatcar Update Engine starting Jan 29 11:01:19.021801 update_engine[1455]: I20250129 11:01:19.006659 1455 update_check_scheduler.cc:74] Next update check in 3m4s Jan 29 11:01:19.020365 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:01:19.021249 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:01:19.025212 jq[1466]: true Jan 29 11:01:19.027942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:01:19.028091 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:01:19.030381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:01:19.030492 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:01:19.037671 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:01:19.039943 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:01:19.103955 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:01:19.103955 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:01:19.103955 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:01:19.110239 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 29 11:01:19.105215 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:01:19.105400 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:01:19.113917 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:01:19.139254 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:01:19.142925 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:01:19.147128 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:01:19.220730 containerd[1467]: time="2025-01-29T11:01:19.220457573Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:01:19.252459 containerd[1467]: time="2025-01-29T11:01:19.252162268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:01:19.253634 containerd[1467]: time="2025-01-29T11:01:19.253600339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:01:19.253735 containerd[1467]: time="2025-01-29T11:01:19.253719149Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:01:19.253813 containerd[1467]: time="2025-01-29T11:01:19.253799582Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:01:19.254083 containerd[1467]: time="2025-01-29T11:01:19.254058549Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254242960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254328469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254342541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254514760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254529831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254542383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254552417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254638247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254817381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254904690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:01:19.255153 containerd[1467]: time="2025-01-29T11:01:19.254918281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 29 11:01:19.255417 containerd[1467]: time="2025-01-29T11:01:19.254985042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:01:19.255417 containerd[1467]: time="2025-01-29T11:01:19.255021581Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:01:19.258987 containerd[1467]: time="2025-01-29T11:01:19.258961456Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:01:19.259118 containerd[1467]: time="2025-01-29T11:01:19.259100894Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:01:19.259249 containerd[1467]: time="2025-01-29T11:01:19.259230337Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:01:19.259311 containerd[1467]: time="2025-01-29T11:01:19.259298657Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:01:19.259364 containerd[1467]: time="2025-01-29T11:01:19.259352225Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:01:19.259624 containerd[1467]: time="2025-01-29T11:01:19.259575894Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:01:19.260041 containerd[1467]: time="2025-01-29T11:01:19.260018752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:01:19.260297 containerd[1467]: time="2025-01-29T11:01:19.260275761Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:01:19.260445 containerd[1467]: time="2025-01-29T11:01:19.260427871Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:01:19.260521 containerd[1467]: time="2025-01-29T11:01:19.260507584Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:01:19.260603 containerd[1467]: time="2025-01-29T11:01:19.260568188Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.260738 containerd[1467]: time="2025-01-29T11:01:19.260719699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.260798 containerd[1467]: time="2025-01-29T11:01:19.260784900Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.260903990Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.260928016Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.260944726Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.260957318Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.260968272Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.260989659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261003651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261016124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261028237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261039350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261052103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261071931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261086522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262603 containerd[1467]: time="2025-01-29T11:01:19.261112187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261127338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261141330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261153523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261168914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261184345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261218924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261233556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261244149Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261422524Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261440473Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261451867Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261462980Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:01:19.262871 containerd[1467]: time="2025-01-29T11:01:19.261472255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.263077 containerd[1467]: time="2025-01-29T11:01:19.261484008Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:01:19.263077 containerd[1467]: time="2025-01-29T11:01:19.261494002Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:01:19.263077 containerd[1467]: time="2025-01-29T11:01:19.261503756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:01:19.263128 containerd[1467]: time="2025-01-29T11:01:19.261849592Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:01:19.263128 containerd[1467]: time="2025-01-29T11:01:19.261897084Z" level=info msg="Connect containerd service" Jan 29 11:01:19.263128 containerd[1467]: time="2025-01-29T11:01:19.261927546Z" level=info msg="using legacy CRI server" Jan 29 11:01:19.263128 containerd[1467]: time="2025-01-29T11:01:19.261934422Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:01:19.263128 containerd[1467]: time="2025-01-29T11:01:19.262164166Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:01:19.263826 containerd[1467]: time="2025-01-29T11:01:19.263800681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:01:19.264263 containerd[1467]: time="2025-01-29T11:01:19.264169303Z" level=info msg="Start subscribing containerd event" Jan 29 11:01:19.264263 containerd[1467]: time="2025-01-29T11:01:19.264235864Z" level=info msg="Start recovering state" Jan 29 11:01:19.264323 containerd[1467]: time="2025-01-29T11:01:19.264299547Z" level=info msg="Start event monitor" Jan 29 11:01:19.264323 containerd[1467]: time="2025-01-29T11:01:19.264309900Z" level=info msg="Start snapshots syncer" Jan 29 11:01:19.264323 containerd[1467]: time="2025-01-29T11:01:19.264320294Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:01:19.264432 containerd[1467]: time="2025-01-29T11:01:19.264328689Z" level=info msg="Start streaming server" Jan 29 11:01:19.264806 containerd[1467]: time="2025-01-29T11:01:19.264781063Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:01:19.264863 containerd[1467]: time="2025-01-29T11:01:19.264832152Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:01:19.266228 containerd[1467]: time="2025-01-29T11:01:19.264877406Z" level=info msg="containerd successfully booted in 0.045346s" Jan 29 11:01:19.264973 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:01:19.441553 tar[1464]: linux-arm64/README.md Jan 29 11:01:19.456040 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:01:20.348811 systemd-networkd[1388]: eth0: Gained IPv6LL Jan 29 11:01:20.351367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:01:20.353027 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:01:20.361840 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:01:20.364436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:01:20.366665 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:01:20.384644 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:01:20.386187 systemd[1]: coreos-metadata.service: Deactivated successfully. 
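
The "failed to load cni during init" error in the containerd output above is expected at this point in boot: the CRI plugin's NetworkPluginConfDir (/etc/cni/net.d, per the config dump) is still empty, and stays empty until a pod network add-on drops a config file there. As a rough sketch of what eventually satisfies the loader, a bridge conflist of the kind containerd's getting-started docs use looks like this (file name and subnet are illustrative, not taken from this host):

    cat <<'EOF' >/etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {"type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
         "ipam": {"type": "host-local", "ranges": [[{"subnet": "10.88.0.0/16"}]]}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF

With NetworkPluginMaxConfNum:1 in the config above, only the lexically first file in that directory is honored.
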
Jan 29 11:01:20.386330 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:01:20.390631 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:01:20.394806 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:01:20.415612 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:01:20.431844 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:01:20.437288 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:01:20.437505 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:01:20.440066 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:01:20.455623 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:01:20.468032 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:01:20.470162 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:01:20.471166 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:01:20.900952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:01:20.902733 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:01:20.905469 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:01:20.908365 systemd[1]: Startup finished in 551ms (kernel) + 4.603s (initrd) + 3.626s (userspace) = 8.782s. Jan 29 11:01:21.308000 kubelet[1553]: E0129 11:01:21.307852 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:01:21.310190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:01:21.310337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:01:25.400522 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:01:25.401662 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:55052.service - OpenSSH per-connection server daemon (10.0.0.1:55052). Jan 29 11:01:25.464530 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 55052 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:25.466308 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:25.476440 systemd-logind[1452]: New session 1 of user core. Jan 29 11:01:25.477439 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:01:25.487858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:01:25.497112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:01:25.500846 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:01:25.507563 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:01:25.587781 systemd[1571]: Queued start job for default target default.target. Jan 29 11:01:25.602638 systemd[1571]: Created slice app.slice - User Application Slice. Jan 29 11:01:25.602679 systemd[1571]: Reached target paths.target - Paths. 
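
The kubelet failure above (exit status 1, /var/lib/kubelet/config.yaml missing) is the normal pre-bootstrap state: on a kubeadm-managed node that file is generated by kubeadm init or kubeadm join, so the unit keeps failing and being restarted until one of those runs. For orientation, the file is a KubeletConfiguration document whose minimal shape is roughly the following (an illustrative hand sketch, not what kubeadm would generate verbatim):

    cat <<'EOF' >/var/lib/kubelet/config.yaml   # illustrative only; kubeadm normally writes this file
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup:true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
    EOF
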
Jan 29 11:01:25.602691 systemd[1571]: Reached target timers.target - Timers. Jan 29 11:01:25.603926 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:01:25.613866 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:01:25.613929 systemd[1571]: Reached target sockets.target - Sockets. Jan 29 11:01:25.613941 systemd[1571]: Reached target basic.target - Basic System. Jan 29 11:01:25.613975 systemd[1571]: Reached target default.target - Main User Target. Jan 29 11:01:25.614000 systemd[1571]: Startup finished in 101ms. Jan 29 11:01:25.614307 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:01:25.623767 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:01:25.685937 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:55056.service - OpenSSH per-connection server daemon (10.0.0.1:55056). Jan 29 11:01:25.739131 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 55056 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:25.740352 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:25.744305 systemd-logind[1452]: New session 2 of user core. Jan 29 11:01:25.753807 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:01:25.810978 sshd[1584]: Connection closed by 10.0.0.1 port 55056 Jan 29 11:01:25.811701 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:25.822549 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:55056.service: Deactivated successfully. Jan 29 11:01:25.826301 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:01:25.827798 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:01:25.835994 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:55058.service - OpenSSH per-connection server daemon (10.0.0.1:55058). Jan 29 11:01:25.838331 systemd-logind[1452]: Removed session 2. Jan 29 11:01:25.878546 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 55058 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:25.879074 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:25.883795 systemd-logind[1452]: New session 3 of user core. Jan 29 11:01:25.896797 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:01:25.945724 sshd[1591]: Connection closed by 10.0.0.1 port 55058 Jan 29 11:01:25.947964 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:25.956615 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:55058.service: Deactivated successfully. Jan 29 11:01:25.958113 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:01:25.964062 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:01:25.977110 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:55064.service - OpenSSH per-connection server daemon (10.0.0.1:55064). Jan 29 11:01:25.979106 systemd-logind[1452]: Removed session 3. Jan 29 11:01:26.022691 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 55064 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:26.023764 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:26.029011 systemd-logind[1452]: New session 4 of user core. Jan 29 11:01:26.037847 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 11:01:26.098643 sshd[1598]: Connection closed by 10.0.0.1 port 55064 Jan 29 11:01:26.099048 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:26.114203 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:55064.service: Deactivated successfully. Jan 29 11:01:26.115835 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:01:26.120502 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:01:26.121784 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:55076.service - OpenSSH per-connection server daemon (10.0.0.1:55076). Jan 29 11:01:26.122844 systemd-logind[1452]: Removed session 4. Jan 29 11:01:26.164565 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 55076 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:26.165098 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:26.169485 systemd-logind[1452]: New session 5 of user core. Jan 29 11:01:26.178769 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:01:26.247778 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:01:26.248049 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:01:26.262400 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 29 11:01:26.265964 sshd[1605]: Connection closed by 10.0.0.1 port 55076 Jan 29 11:01:26.264363 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:26.276286 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:55076.service: Deactivated successfully. Jan 29 11:01:26.279127 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:01:26.282185 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:01:26.283772 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:55090.service - OpenSSH per-connection server daemon (10.0.0.1:55090). Jan 29 11:01:26.284479 systemd-logind[1452]: Removed session 5. Jan 29 11:01:26.333702 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 55090 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:26.334181 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:26.338684 systemd-logind[1452]: New session 6 of user core. Jan 29 11:01:26.350770 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:01:26.401497 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:01:26.401784 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:01:26.404885 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 29 11:01:26.409735 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:01:26.410258 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:01:26.426911 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:01:26.449224 augenrules[1637]: No rules Jan 29 11:01:26.450385 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:01:26.450578 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
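
The audit-rules sequence above shows the usual flow: augenrules merges every file under /etc/audit/rules.d/ into the kernel's rule set, so after the two rules files are removed it legitimately reports "No rules" and the service still finishes cleanly. On a host with the audit userspace tools installed, the resulting state can be checked with:

    ls /etc/audit/rules.d/    # source fragments that augenrules merges
    augenrules --check        # is the merged rule file up to date?
    auditctl -l               # rules currently loaded in the kernel
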
Jan 29 11:01:26.451556 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 29 11:01:26.453304 sshd[1613]: Connection closed by 10.0.0.1 port 55090 Jan 29 11:01:26.453119 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:26.465995 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:55090.service: Deactivated successfully. Jan 29 11:01:26.467604 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:01:26.468287 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:01:26.480070 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:55096.service - OpenSSH per-connection server daemon (10.0.0.1:55096). Jan 29 11:01:26.480824 systemd-logind[1452]: Removed session 6. Jan 29 11:01:26.525243 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 55096 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:01:26.526382 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:26.530223 systemd-logind[1452]: New session 7 of user core. Jan 29 11:01:26.539742 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:01:26.592002 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:01:26.592279 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:01:26.936866 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:01:26.937142 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:01:27.225707 dockerd[1669]: time="2025-01-29T11:01:27.225308892Z" level=info msg="Starting up" Jan 29 11:01:27.391887 dockerd[1669]: time="2025-01-29T11:01:27.391831251Z" level=info msg="Loading containers: start." Jan 29 11:01:27.556650 kernel: Initializing XFRM netlink socket Jan 29 11:01:27.634270 systemd-networkd[1388]: docker0: Link UP Jan 29 11:01:27.679174 dockerd[1669]: time="2025-01-29T11:01:27.679115710Z" level=info msg="Loading containers: done." Jan 29 11:01:27.691658 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1072714397-merged.mount: Deactivated successfully. Jan 29 11:01:27.694617 dockerd[1669]: time="2025-01-29T11:01:27.694405914Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:01:27.694617 dockerd[1669]: time="2025-01-29T11:01:27.694504948Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:01:27.694734 dockerd[1669]: time="2025-01-29T11:01:27.694653121Z" level=info msg="Daemon has completed initialization" Jan 29 11:01:27.731394 dockerd[1669]: time="2025-01-29T11:01:27.731323540Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:01:27.731536 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:01:28.301597 containerd[1467]: time="2025-01-29T11:01:28.301544754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:01:29.088732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403890403.mount: Deactivated successfully. 
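
dockerd's overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker computes layer diffs via the slower non-native path, which affects image builds and commits but not running containers. The active storage driver and daemon version can be confirmed with the standard CLI:

    docker info --format '{{.Driver}}'          # expect: overlay2
    docker info --format '{{.ServerVersion}}'   # expect: 27.2.1, per the log above
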
Jan 29 11:01:30.190220 containerd[1467]: time="2025-01-29T11:01:30.190151807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:30.190615 containerd[1467]: time="2025-01-29T11:01:30.190547683Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220950" Jan 29 11:01:30.191641 containerd[1467]: time="2025-01-29T11:01:30.191577134Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:30.195631 containerd[1467]: time="2025-01-29T11:01:30.195575231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:30.196822 containerd[1467]: time="2025-01-29T11:01:30.196786327Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.89518068s" Jan 29 11:01:30.196858 containerd[1467]: time="2025-01-29T11:01:30.196824032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 11:01:30.197908 containerd[1467]: time="2025-01-29T11:01:30.197880112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:01:31.387533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:01:31.396773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:01:31.492886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:01:31.496719 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:01:31.535325 kubelet[1928]: E0129 11:01:31.535219 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:01:31.538143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:01:31.538312 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:01:33.252036 containerd[1467]: time="2025-01-29T11:01:33.251993349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:33.252873 containerd[1467]: time="2025-01-29T11:01:33.252411511Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527109" Jan 29 11:01:33.254248 containerd[1467]: time="2025-01-29T11:01:33.253696585Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:33.257276 containerd[1467]: time="2025-01-29T11:01:33.257214534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:33.258409 containerd[1467]: time="2025-01-29T11:01:33.258293806Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 3.060377549s" Jan 29 11:01:33.258409 containerd[1467]: time="2025-01-29T11:01:33.258328593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 11:01:33.258893 containerd[1467]: time="2025-01-29T11:01:33.258871108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:01:34.619397 containerd[1467]: time="2025-01-29T11:01:34.619334588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:34.619968 containerd[1467]: time="2025-01-29T11:01:34.619920733Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481115" Jan 29 11:01:34.620680 containerd[1467]: time="2025-01-29T11:01:34.620655544Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:34.623494 containerd[1467]: time="2025-01-29T11:01:34.623434966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:34.624892 containerd[1467]: time="2025-01-29T11:01:34.624772915Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.365793968s" Jan 29 11:01:34.624892 containerd[1467]: time="2025-01-29T11:01:34.624808662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 11:01:34.625603 
containerd[1467]: time="2025-01-29T11:01:34.625507766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:01:35.863788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067069571.mount: Deactivated successfully. Jan 29 11:01:36.093601 containerd[1467]: time="2025-01-29T11:01:36.093529702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:36.094104 containerd[1467]: time="2025-01-29T11:01:36.094047364Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 29 11:01:36.095058 containerd[1467]: time="2025-01-29T11:01:36.095025708Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:36.097091 containerd[1467]: time="2025-01-29T11:01:36.097057649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:36.097773 containerd[1467]: time="2025-01-29T11:01:36.097737455Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.472196102s" Jan 29 11:01:36.097815 containerd[1467]: time="2025-01-29T11:01:36.097771044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 11:01:36.098240 containerd[1467]: time="2025-01-29T11:01:36.098207134Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:01:36.805957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582591875.mount: Deactivated successfully. 
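
Each pull above follows the same CRI pattern: a PullImage request, ImageCreate events as the repo tag, unpacked image id, and repo digest are recorded, then a summary with the elapsed time (1.47s for kube-proxy here). The transient var-lib-containerd-tmpmounts mount units that systemd reports as deactivated are containerd's scratch mounts for unpacking layers into the overlayfs snapshotter. The resulting store can be inspected, or a pull driven by hand, with:

    crictl -r unix:///run/containerd/containerd.sock images
    ctr -n k8s.io images ls | grep kube-proxy
    crictl -r unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10
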
Jan 29 11:01:37.606093 containerd[1467]: time="2025-01-29T11:01:37.606018804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:37.607291 containerd[1467]: time="2025-01-29T11:01:37.606680303Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jan 29 11:01:37.607745 containerd[1467]: time="2025-01-29T11:01:37.607678851Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:37.610785 containerd[1467]: time="2025-01-29T11:01:37.610745030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:37.612289 containerd[1467]: time="2025-01-29T11:01:37.612213740Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.513974618s" Jan 29 11:01:37.612289 containerd[1467]: time="2025-01-29T11:01:37.612283877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 11:01:37.612855 containerd[1467]: time="2025-01-29T11:01:37.612823777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:01:38.198202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473580032.mount: Deactivated successfully. 
Jan 29 11:01:38.260719 containerd[1467]: time="2025-01-29T11:01:38.260655579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:38.263008 containerd[1467]: time="2025-01-29T11:01:38.262956357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 11:01:38.265216 containerd[1467]: time="2025-01-29T11:01:38.265174481Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:38.269202 containerd[1467]: time="2025-01-29T11:01:38.269147959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:38.270017 containerd[1467]: time="2025-01-29T11:01:38.269873205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 657.015439ms" Jan 29 11:01:38.270017 containerd[1467]: time="2025-01-29T11:01:38.269918510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:01:38.270530 containerd[1467]: time="2025-01-29T11:01:38.270508800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:01:39.056798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222698660.mount: Deactivated successfully. Jan 29 11:01:40.854138 containerd[1467]: time="2025-01-29T11:01:40.852925523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:40.854534 containerd[1467]: time="2025-01-29T11:01:40.854481652Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Jan 29 11:01:40.855317 containerd[1467]: time="2025-01-29T11:01:40.855267054Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:40.861653 containerd[1467]: time="2025-01-29T11:01:40.861608894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:40.863675 containerd[1467]: time="2025-01-29T11:01:40.863633121Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.592929864s" Jan 29 11:01:40.863675 containerd[1467]: time="2025-01-29T11:01:40.863673509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 11:01:41.637482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
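
"Scheduled restart job, restart counter is at 2" is systemd, not the kubelet, doing the retrying: the unit fails on the missing config file and the service manager queues the next start. The roughly ten-second spacing visible in the timestamps (failure at 11:01:31.538, next start attempt at 11:01:41.647) matches the Restart=always, RestartSec=10 that kubeadm-style kubelet units typically carry (an assumption about this host's unit, which the log does not show). The effective settings can be read with:

    systemctl show kubelet -p Restart -p RestartUSec
    systemctl cat kubelet      # unit file plus drop-ins
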
Jan 29 11:01:41.647802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:01:41.746077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:01:41.751065 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:01:41.790669 kubelet[2094]: E0129 11:01:41.790623 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:01:41.793254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:01:41.793414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:01:48.477745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:01:48.489105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:01:48.513242 systemd[1]: Reloading requested from client PID 2109 ('systemctl') (unit session-7.scope)... Jan 29 11:01:48.513259 systemd[1]: Reloading... Jan 29 11:01:48.588606 zram_generator::config[2151]: No configuration found. Jan 29 11:01:49.029276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:01:49.080655 systemd[1]: Reloading finished in 567 ms. Jan 29 11:01:49.117697 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:01:49.119970 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:01:49.120156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:01:49.121704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:01:49.225787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:01:49.231631 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:01:49.266495 kubelet[2195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:01:49.266495 kubelet[2195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:01:49.266495 kubelet[2195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
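
The three deprecation warnings above all point the same way: kubelet flags are being folded into the KubeletConfiguration file. Two of them have direct config-file equivalents, sketched below with values taken from elsewhere in this log; --pod-infra-container-image has none, since (per the warning) the sandbox image will come from the CRI once the flag is removed in 1.35.

    # fields inside /var/lib/kubelet/config.yaml (illustrative fragment)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
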
Jan 29 11:01:49.266950 kubelet[2195]: I0129 11:01:49.266555 2195 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:01:50.503607 kubelet[2195]: I0129 11:01:50.502994 2195 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:01:50.503607 kubelet[2195]: I0129 11:01:50.503030 2195 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:01:50.503607 kubelet[2195]: I0129 11:01:50.503295 2195 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:01:50.531371 kubelet[2195]: E0129 11:01:50.531334 2195 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:01:50.531908 kubelet[2195]: I0129 11:01:50.531870 2195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:01:50.538466 kubelet[2195]: E0129 11:01:50.538422 2195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:01:50.538466 kubelet[2195]: I0129 11:01:50.538457 2195 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:01:50.541214 kubelet[2195]: I0129 11:01:50.541175 2195 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:01:50.543166 kubelet[2195]: I0129 11:01:50.543111 2195 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:01:50.543417 kubelet[2195]: I0129 11:01:50.543162 2195 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:01:50.543502 kubelet[2195]: I0129 11:01:50.543480 2195 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:01:50.543502 kubelet[2195]: I0129 11:01:50.543489 2195 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:01:50.543916 kubelet[2195]: I0129 11:01:50.543890 2195 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:01:50.546489 kubelet[2195]: I0129 11:01:50.546460 2195 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:01:50.546489 kubelet[2195]: I0129 11:01:50.546488 2195 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:01:50.546549 kubelet[2195]: I0129 11:01:50.546508 2195 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:01:50.546549 kubelet[2195]: I0129 11:01:50.546518 2195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:01:50.550478 kubelet[2195]: I0129 11:01:50.549907 2195 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:01:50.552958 kubelet[2195]: I0129 11:01:50.552532 2195 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:01:50.552958 kubelet[2195]: W0129 11:01:50.552670 2195 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
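
The HardEvictionThresholds block in the nodeConfig dump above decodes to: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. In KubeletConfiguration terms the same thresholds would be written as:

    # evictionHard equivalent of the thresholds in the dump (illustrative)
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
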
Jan 29 11:01:50.552958 kubelet[2195]: W0129 11:01:50.552762 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 29 11:01:50.552958 kubelet[2195]: E0129 11:01:50.552820 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:01:50.553541 kubelet[2195]: I0129 11:01:50.553514 2195 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:01:50.553607 kubelet[2195]: I0129 11:01:50.553554 2195 server.go:1287] "Started kubelet" Jan 29 11:01:50.553919 kubelet[2195]: W0129 11:01:50.553878 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 29 11:01:50.554023 kubelet[2195]: E0129 11:01:50.554004 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:01:50.557318 kubelet[2195]: I0129 11:01:50.555218 2195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:01:50.557318 kubelet[2195]: I0129 11:01:50.556327 2195 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:01:50.557797 kubelet[2195]: E0129 11:01:50.557412 2195 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f24dc7a6afb8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:01:50.553537421 +0000 UTC m=+1.319063725,LastTimestamp:2025-01-29 11:01:50.553537421 +0000 UTC m=+1.319063725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:01:50.558658 kubelet[2195]: I0129 11:01:50.558641 2195 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:01:50.559538 kubelet[2195]: E0129 11:01:50.559446 2195 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:01:50.559538 kubelet[2195]: I0129 11:01:50.559499 2195 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:01:50.559741 kubelet[2195]: I0129 11:01:50.559686 2195 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:01:50.559781 kubelet[2195]: I0129 11:01:50.559745 2195 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:01:50.560184 kubelet[2195]: W0129 11:01:50.560029 2195 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 29 11:01:50.560184 kubelet[2195]: E0129 11:01:50.560068 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:01:50.560284 kubelet[2195]: E0129 11:01:50.560211 2195 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:01:50.560448 kubelet[2195]: E0129 11:01:50.560332 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Jan 29 11:01:50.560495 kubelet[2195]: I0129 11:01:50.560467 2195 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:01:50.560597 kubelet[2195]: I0129 11:01:50.560552 2195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:01:50.561163 kubelet[2195]: I0129 11:01:50.561048 2195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:01:50.561432 kubelet[2195]: I0129 11:01:50.561412 2195 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:01:50.562232 kubelet[2195]: I0129 11:01:50.562156 2195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:01:50.562430 kubelet[2195]: I0129 11:01:50.562400 2195 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:01:50.568869 kubelet[2195]: I0129 11:01:50.568350 2195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:01:50.570032 kubelet[2195]: I0129 11:01:50.569678 2195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:01:50.570032 kubelet[2195]: I0129 11:01:50.569709 2195 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:01:50.570032 kubelet[2195]: I0129 11:01:50.569727 2195 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
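
Every "dial tcp 10.0.0.65:6443: connect: connection refused" in this stretch (the CSR post, the Node/Service/CSIDriver reflectors, the event write, the node-lease controller) is one symptom: the kubelet is up before kube-apiserver, which it is itself about to launch as a static pod from /etc/kubernetes/manifests. Note the lease controller's retry interval backing off, 200ms above and 400ms/800ms further down. Once the apiserver's sandbox is up, a direct probe confirms liveness:

    curl -sk https://10.0.0.65:6443/healthz   # -k: the cluster CA may not be in the host trust store
    ls /etc/kubernetes/manifests/             # static pod manifests the kubelet watches
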
Jan 29 11:01:50.570032 kubelet[2195]: I0129 11:01:50.569734 2195 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:01:50.570032 kubelet[2195]: E0129 11:01:50.569773 2195 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:01:50.574384 kubelet[2195]: W0129 11:01:50.574191 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 29 11:01:50.574384 kubelet[2195]: E0129 11:01:50.574246 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:01:50.575106 kubelet[2195]: I0129 11:01:50.575081 2195 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:01:50.575106 kubelet[2195]: I0129 11:01:50.575095 2195 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:01:50.575206 kubelet[2195]: I0129 11:01:50.575114 2195 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:01:50.577876 kubelet[2195]: I0129 11:01:50.577844 2195 policy_none.go:49] "None policy: Start" Jan 29 11:01:50.577876 kubelet[2195]: I0129 11:01:50.577868 2195 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:01:50.577876 kubelet[2195]: I0129 11:01:50.577879 2195 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:01:50.584558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:01:50.602297 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:01:50.605370 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:01:50.618715 kubelet[2195]: I0129 11:01:50.618492 2195 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:01:50.618804 kubelet[2195]: I0129 11:01:50.618738 2195 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:01:50.618804 kubelet[2195]: I0129 11:01:50.618751 2195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:01:50.619237 kubelet[2195]: I0129 11:01:50.619216 2195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:01:50.619705 kubelet[2195]: E0129 11:01:50.619682 2195 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:01:50.619805 kubelet[2195]: E0129 11:01:50.619722 2195 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:01:50.677436 systemd[1]: Created slice kubepods-burstable-pod6697d085349abaecd8b74e642f028e95.slice - libcontainer container kubepods-burstable-pod6697d085349abaecd8b74e642f028e95.slice. 
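
The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units created above are the kubelet's cgroup hierarchy for the pod QoS classes (Guaranteed pods sit directly under kubepods.slice), consistent with CgroupDriver systemd and CgroupVersion 2 in the earlier nodeConfig dump. The pod-level slice that follows, kubepods-burstable-pod6697d085..., belongs to the first control-plane static pod. The live layout can be browsed with:

    systemd-cgls /kubepods.slice
    ls /sys/fs/cgroup/kubepods.slice/
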
Jan 29 11:01:50.697071 kubelet[2195]: E0129 11:01:50.696869 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:50.700015 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice.
Jan 29 11:01:50.702431 kubelet[2195]: E0129 11:01:50.702399 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:50.715070 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice.
Jan 29 11:01:50.716600 kubelet[2195]: E0129 11:01:50.716443 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:50.720425 kubelet[2195]: I0129 11:01:50.720404 2195 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 29 11:01:50.720988 kubelet[2195]: E0129 11:01:50.720953 2195 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Jan 29 11:01:50.761927 kubelet[2195]: E0129 11:01:50.761812 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms"
Jan 29 11:01:50.860881 kubelet[2195]: I0129 11:01:50.860824 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:50.860881 kubelet[2195]: I0129 11:01:50.860869 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:50.860881 kubelet[2195]: I0129 11:01:50.860891 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:50.861053 kubelet[2195]: I0129 11:01:50.860908 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:50.861053 kubelet[2195]: I0129 11:01:50.860925 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:50.861053 kubelet[2195]: I0129 11:01:50.860942 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6697d085349abaecd8b74e642f028e95-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6697d085349abaecd8b74e642f028e95\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:50.861053 kubelet[2195]: I0129 11:01:50.860957 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6697d085349abaecd8b74e642f028e95-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6697d085349abaecd8b74e642f028e95\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:50.861053 kubelet[2195]: I0129 11:01:50.860975 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6697d085349abaecd8b74e642f028e95-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6697d085349abaecd8b74e642f028e95\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:50.861159 kubelet[2195]: I0129 11:01:50.860990 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:50.922859 kubelet[2195]: I0129 11:01:50.922824 2195 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 29 11:01:50.923171 kubelet[2195]: E0129 11:01:50.923132 2195 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Jan 29 11:01:50.997756 kubelet[2195]: E0129 11:01:50.997704 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:50.998573 containerd[1467]: time="2025-01-29T11:01:50.998520578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6697d085349abaecd8b74e642f028e95,Namespace:kube-system,Attempt:0,}"
Jan 29 11:01:51.003653 kubelet[2195]: E0129 11:01:51.003619 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:51.004065 containerd[1467]: time="2025-01-29T11:01:51.004016023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}"
Jan 29 11:01:51.017779 kubelet[2195]: E0129 11:01:51.017667 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:51.019077 containerd[1467]: time="2025-01-29T11:01:51.018815063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}"
Jan 29 11:01:51.163125 kubelet[2195]: E0129 11:01:51.163081 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms"
Jan 29 11:01:51.324882 kubelet[2195]: I0129 11:01:51.324776 2195 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 29 11:01:51.325224 kubelet[2195]: E0129 11:01:51.325095 2195 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Jan 29 11:01:51.480361 kubelet[2195]: W0129 11:01:51.480284 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 29 11:01:51.480361 kubelet[2195]: E0129 11:01:51.480353 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:01:51.567441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949143093.mount: Deactivated successfully.
Jan 29 11:01:51.575410 containerd[1467]: time="2025-01-29T11:01:51.575284840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:01:51.576523 containerd[1467]: time="2025-01-29T11:01:51.576480984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 29 11:01:51.579934 containerd[1467]: time="2025-01-29T11:01:51.579903414Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:01:51.585081 containerd[1467]: time="2025-01-29T11:01:51.585046155Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:01:51.587558 containerd[1467]: time="2025-01-29T11:01:51.586489207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:01:51.588029 containerd[1467]: time="2025-01-29T11:01:51.587994086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:01:51.589033 containerd[1467]: time="2025-01-29T11:01:51.588999591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:01:51.589365 containerd[1467]: time="2025-01-29T11:01:51.589322642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:01:51.590032 containerd[1467]: time="2025-01-29T11:01:51.590003417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.382541ms"
Jan 29 11:01:51.591486 containerd[1467]: time="2025-01-29T11:01:51.591448548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.359141ms"
Jan 29 11:01:51.598120 containerd[1467]: time="2025-01-29T11:01:51.598085971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.194045ms"
Jan 29 11:01:51.723974 containerd[1467]: time="2025-01-29T11:01:51.723686872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:01:51.723974 containerd[1467]: time="2025-01-29T11:01:51.723752857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:01:51.723974 containerd[1467]: time="2025-01-29T11:01:51.723767654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:51.724269 containerd[1467]: time="2025-01-29T11:01:51.724182246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:51.726148 containerd[1467]: time="2025-01-29T11:01:51.725929473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:01:51.726148 containerd[1467]: time="2025-01-29T11:01:51.725983141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:01:51.726148 containerd[1467]: time="2025-01-29T11:01:51.726006456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:51.726148 containerd[1467]: time="2025-01-29T11:01:51.726100956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:51.730201 containerd[1467]: time="2025-01-29T11:01:51.730104261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:01:51.730798 containerd[1467]: time="2025-01-29T11:01:51.730629189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:01:51.730798 containerd[1467]: time="2025-01-29T11:01:51.730654424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:51.730798 containerd[1467]: time="2025-01-29T11:01:51.730733007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:51.744803 systemd[1]: Started cri-containerd-30f517e3a919893ba0a7e92eb899f58ea53e9cff838c5c99c3c6694218885864.scope - libcontainer container 30f517e3a919893ba0a7e92eb899f58ea53e9cff838c5c99c3c6694218885864.
Jan 29 11:01:51.748771 systemd[1]: Started cri-containerd-2448a287d76f8e489923eb71f0cbf3d6b3c307774a023ef0a166894f7d0306ca.scope - libcontainer container 2448a287d76f8e489923eb71f0cbf3d6b3c307774a023ef0a166894f7d0306ca.
Jan 29 11:01:51.750219 systemd[1]: Started cri-containerd-8a19ef2ef2f93bee75385dae4c31bf0ce448d4ca0ce9c54ec0102fcfad1ec12f.scope - libcontainer container 8a19ef2ef2f93bee75385dae4c31bf0ce448d4ca0ce9c54ec0102fcfad1ec12f.
Jan 29 11:01:51.782904 containerd[1467]: time="2025-01-29T11:01:51.782865595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6697d085349abaecd8b74e642f028e95,Namespace:kube-system,Attempt:0,} returns sandbox id \"2448a287d76f8e489923eb71f0cbf3d6b3c307774a023ef0a166894f7d0306ca\""
Jan 29 11:01:51.786837 kubelet[2195]: E0129 11:01:51.786786 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:51.787674 containerd[1467]: time="2025-01-29T11:01:51.787521481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a19ef2ef2f93bee75385dae4c31bf0ce448d4ca0ce9c54ec0102fcfad1ec12f\""
Jan 29 11:01:51.789434 containerd[1467]: time="2025-01-29T11:01:51.788072443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"30f517e3a919893ba0a7e92eb899f58ea53e9cff838c5c99c3c6694218885864\""
Jan 29 11:01:51.789539 containerd[1467]: time="2025-01-29T11:01:51.789509216Z" level=info msg="CreateContainer within sandbox \"2448a287d76f8e489923eb71f0cbf3d6b3c307774a023ef0a166894f7d0306ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 11:01:51.789892 kubelet[2195]: E0129 11:01:51.789865 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:51.790424 kubelet[2195]: E0129 11:01:51.790405 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:51.792419 containerd[1467]: time="2025-01-29T11:01:51.792393561Z" level=info msg="CreateContainer within sandbox \"30f517e3a919893ba0a7e92eb899f58ea53e9cff838c5c99c3c6694218885864\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 11:01:51.792622 containerd[1467]: time="2025-01-29T11:01:51.792553286Z" level=info msg="CreateContainer within sandbox \"8a19ef2ef2f93bee75385dae4c31bf0ce448d4ca0ce9c54ec0102fcfad1ec12f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 11:01:51.822271 containerd[1467]: time="2025-01-29T11:01:51.822201076Z" level=info msg="CreateContainer within sandbox \"2448a287d76f8e489923eb71f0cbf3d6b3c307774a023ef0a166894f7d0306ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1f84f802e356eaac0af2608c89be7abeafc8e50da7ab8fd4ec5e3b2486dea45\""
Jan 29 11:01:51.823259 containerd[1467]: time="2025-01-29T11:01:51.823089206Z" level=info msg="StartContainer for \"c1f84f802e356eaac0af2608c89be7abeafc8e50da7ab8fd4ec5e3b2486dea45\""
Jan 29 11:01:51.823726 kubelet[2195]: W0129 11:01:51.823631 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 29 11:01:51.823726 kubelet[2195]: E0129 11:01:51.823691 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:01:51.830099 containerd[1467]: time="2025-01-29T11:01:51.830006689Z" level=info msg="CreateContainer within sandbox \"8a19ef2ef2f93bee75385dae4c31bf0ce448d4ca0ce9c54ec0102fcfad1ec12f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b687d852332bf30a70b40e4f8e41eee03d24d47486463c4c6e0469fb760dfc23\""
Jan 29 11:01:51.831413 containerd[1467]: time="2025-01-29T11:01:51.831015474Z" level=info msg="StartContainer for \"b687d852332bf30a70b40e4f8e41eee03d24d47486463c4c6e0469fb760dfc23\""
Jan 29 11:01:51.835062 containerd[1467]: time="2025-01-29T11:01:51.834536202Z" level=info msg="CreateContainer within sandbox \"30f517e3a919893ba0a7e92eb899f58ea53e9cff838c5c99c3c6694218885864\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb1ca981f750f033001b4ae5e5e76372a0ae854d2192773a038b41af7bd327fa\""
Jan 29 11:01:51.835617 containerd[1467]: time="2025-01-29T11:01:51.835509994Z" level=info msg="StartContainer for \"bb1ca981f750f033001b4ae5e5e76372a0ae854d2192773a038b41af7bd327fa\""
Jan 29 11:01:51.848806 systemd[1]: Started cri-containerd-c1f84f802e356eaac0af2608c89be7abeafc8e50da7ab8fd4ec5e3b2486dea45.scope - libcontainer container c1f84f802e356eaac0af2608c89be7abeafc8e50da7ab8fd4ec5e3b2486dea45.
Jan 29 11:01:51.868833 systemd[1]: Started cri-containerd-b687d852332bf30a70b40e4f8e41eee03d24d47486463c4c6e0469fb760dfc23.scope - libcontainer container b687d852332bf30a70b40e4f8e41eee03d24d47486463c4c6e0469fb760dfc23.
Jan 29 11:01:51.874131 systemd[1]: Started cri-containerd-bb1ca981f750f033001b4ae5e5e76372a0ae854d2192773a038b41af7bd327fa.scope - libcontainer container bb1ca981f750f033001b4ae5e5e76372a0ae854d2192773a038b41af7bd327fa.
Jan 29 11:01:51.903594 containerd[1467]: time="2025-01-29T11:01:51.903536308Z" level=info msg="StartContainer for \"c1f84f802e356eaac0af2608c89be7abeafc8e50da7ab8fd4ec5e3b2486dea45\" returns successfully"
Jan 29 11:01:51.954545 containerd[1467]: time="2025-01-29T11:01:51.951756332Z" level=info msg="StartContainer for \"bb1ca981f750f033001b4ae5e5e76372a0ae854d2192773a038b41af7bd327fa\" returns successfully"
Jan 29 11:01:51.954545 containerd[1467]: time="2025-01-29T11:01:51.951848312Z" level=info msg="StartContainer for \"b687d852332bf30a70b40e4f8e41eee03d24d47486463c4c6e0469fb760dfc23\" returns successfully"
Jan 29 11:01:51.966484 kubelet[2195]: E0129 11:01:51.966414 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s"
Jan 29 11:01:51.993874 kubelet[2195]: W0129 11:01:51.993780 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 29 11:01:51.993874 kubelet[2195]: E0129 11:01:51.993846 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:01:52.042330 kubelet[2195]: W0129 11:01:52.042212 2195 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 29 11:01:52.042330 kubelet[2195]: E0129 11:01:52.042294 2195 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:01:52.128367 kubelet[2195]: I0129 11:01:52.126949 2195 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 29 11:01:52.128367 kubelet[2195]: E0129 11:01:52.127397 2195 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Jan 29 11:01:52.584164 kubelet[2195]: E0129 11:01:52.583937 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:52.584164 kubelet[2195]: E0129 11:01:52.583987 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:52.584164 kubelet[2195]: E0129 11:01:52.584066 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:52.584164 kubelet[2195]: E0129 11:01:52.584081 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:52.586690 kubelet[2195]: E0129 11:01:52.586425 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:52.586690 kubelet[2195]: E0129 11:01:52.586531 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:53.588380 kubelet[2195]: E0129 11:01:53.588345 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:53.588762 kubelet[2195]: E0129 11:01:53.588474 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:53.588762 kubelet[2195]: E0129 11:01:53.588492 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:53.588762 kubelet[2195]: E0129 11:01:53.588706 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:53.588762 kubelet[2195]: E0129 11:01:53.588727 2195 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 29 11:01:53.588893 kubelet[2195]: E0129 11:01:53.588815 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:53.687662 kubelet[2195]: E0129 11:01:53.687622 2195 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 29 11:01:53.729616 kubelet[2195]: I0129 11:01:53.728611 2195 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 29 11:01:53.739442 kubelet[2195]: I0129 11:01:53.739243 2195 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Jan 29 11:01:53.739442 kubelet[2195]: E0129 11:01:53.739283 2195 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 29 11:01:53.743563 kubelet[2195]: E0129 11:01:53.743508 2195 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:01:53.844483 kubelet[2195]: E0129 11:01:53.844350 2195 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:01:53.944847 kubelet[2195]: E0129 11:01:53.944812 2195 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:01:54.045514 kubelet[2195]: E0129 11:01:54.045471 2195 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:01:54.160651 kubelet[2195]: I0129 11:01:54.160495 2195 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:54.166820 kubelet[2195]: E0129 11:01:54.166632 2195 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:54.166820 kubelet[2195]: I0129 11:01:54.166655 2195 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:54.168058 kubelet[2195]: E0129 11:01:54.168040 2195 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:54.168400 kubelet[2195]: I0129 11:01:54.168205 2195 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:54.171148 kubelet[2195]: E0129 11:01:54.171111 2195 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:54.549057 kubelet[2195]: I0129 11:01:54.548949 2195 apiserver.go:52] "Watching apiserver"
Jan 29 11:01:54.559816 kubelet[2195]: I0129 11:01:54.559783 2195 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:01:55.695788 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-7.scope)...
Jan 29 11:01:55.695804 systemd[1]: Reloading...
Jan 29 11:01:55.766700 zram_generator::config[2519]: No configuration found.
Jan 29 11:01:55.841146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:01:55.905022 systemd[1]: Reloading finished in 208 ms.
Jan 29 11:01:55.941210 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:01:55.951615 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 11:01:55.951846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:01:55.951922 systemd[1]: kubelet.service: Consumed 1.665s CPU time, 124.5M memory peak, 0B memory swap peak.
Jan 29 11:01:55.968210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:01:56.062391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:01:56.066986 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:01:56.116609 kubelet[2555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:01:56.117048 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:01:56.117048 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:01:56.117221 kubelet[2555]: I0129 11:01:56.117183 2555 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:01:56.124072 kubelet[2555]: I0129 11:01:56.124035 2555 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 11:01:56.124072 kubelet[2555]: I0129 11:01:56.124066 2555 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:01:56.124314 kubelet[2555]: I0129 11:01:56.124298 2555 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 11:01:56.126414 kubelet[2555]: I0129 11:01:56.126384 2555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 11:01:56.130343 kubelet[2555]: I0129 11:01:56.130306 2555 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:01:56.134520 kubelet[2555]: E0129 11:01:56.134486 2555 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 11:01:56.134520 kubelet[2555]: I0129 11:01:56.134518 2555 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 11:01:56.137110 kubelet[2555]: I0129 11:01:56.137084 2555 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:01:56.137345 kubelet[2555]: I0129 11:01:56.137311 2555 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:01:56.137529 kubelet[2555]: I0129 11:01:56.137342 2555 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 11:01:56.137619 kubelet[2555]: I0129 11:01:56.137537 2555 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:01:56.137619 kubelet[2555]: I0129 11:01:56.137546 2555 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 11:01:56.137661 kubelet[2555]: I0129 11:01:56.137633 2555 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:01:56.137798 kubelet[2555]: I0129 11:01:56.137776 2555 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 11:01:56.137798 kubelet[2555]: I0129 11:01:56.137797 2555 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:01:56.137850 kubelet[2555]: I0129 11:01:56.137814 2555 kubelet.go:352] "Adding apiserver pod source"
Jan 29 11:01:56.137850 kubelet[2555]: I0129 11:01:56.137828 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:01:56.138342 kubelet[2555]: I0129 11:01:56.138306 2555 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:01:56.138894 kubelet[2555]: I0129 11:01:56.138867 2555 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:01:56.139474 kubelet[2555]: I0129 11:01:56.139445 2555 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 11:01:56.139513 kubelet[2555]: I0129 11:01:56.139499 2555 server.go:1287] "Started kubelet"
Jan 29 11:01:56.141697 kubelet[2555]: I0129 11:01:56.141647 2555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:01:56.141961 kubelet[2555]: I0129 11:01:56.141945 2555 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:01:56.142021 kubelet[2555]: I0129 11:01:56.142001 2555 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:01:56.145602 kubelet[2555]: I0129 11:01:56.142922 2555 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 11:01:56.145602 kubelet[2555]: I0129 11:01:56.142994 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:01:56.145602 kubelet[2555]: I0129 11:01:56.143839 2555 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:01:56.148759 kubelet[2555]: I0129 11:01:56.148724 2555 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 11:01:56.148874 kubelet[2555]: E0129 11:01:56.148857 2555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:01:56.149152 kubelet[2555]: I0129 11:01:56.149131 2555 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:01:56.149367 kubelet[2555]: I0129 11:01:56.149351 2555 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:01:56.161198 kubelet[2555]: I0129 11:01:56.161153 2555 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:01:56.161312 kubelet[2555]: I0129 11:01:56.161280 2555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:01:56.168773 kubelet[2555]: E0129 11:01:56.167158 2555 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:01:56.168773 kubelet[2555]: I0129 11:01:56.167574 2555 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:01:56.174255 kubelet[2555]: I0129 11:01:56.174213 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:01:56.175358 kubelet[2555]: I0129 11:01:56.175323 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:01:56.176215 kubelet[2555]: I0129 11:01:56.175721 2555 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 11:01:56.176215 kubelet[2555]: I0129 11:01:56.175753 2555 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 11:01:56.176215 kubelet[2555]: I0129 11:01:56.175760 2555 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 11:01:56.176215 kubelet[2555]: E0129 11:01:56.175810 2555 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:01:56.197485 kubelet[2555]: I0129 11:01:56.197451 2555 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 11:01:56.197485 kubelet[2555]: I0129 11:01:56.197476 2555 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 11:01:56.197485 kubelet[2555]: I0129 11:01:56.197495 2555 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:01:56.197680 kubelet[2555]: I0129 11:01:56.197666 2555 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 11:01:56.197710 kubelet[2555]: I0129 11:01:56.197678 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 11:01:56.197710 kubelet[2555]: I0129 11:01:56.197695 2555 policy_none.go:49] "None policy: Start"
Jan 29 11:01:56.197710 kubelet[2555]: I0129 11:01:56.197703 2555 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 11:01:56.197710 kubelet[2555]: I0129 11:01:56.197712 2555 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:01:56.197848 kubelet[2555]: I0129 11:01:56.197832 2555 state_mem.go:75] "Updated machine memory state"
Jan 29 11:01:56.201071 kubelet[2555]: I0129 11:01:56.201041 2555 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:01:56.201249 kubelet[2555]: I0129 11:01:56.201223 2555 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:01:56.201296 kubelet[2555]: I0129 11:01:56.201240 2555 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:01:56.201483 kubelet[2555]: I0129 11:01:56.201410 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:01:56.202503 kubelet[2555]: E0129 11:01:56.202246 2555 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 11:01:56.276757 kubelet[2555]: I0129 11:01:56.276631 2555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:56.277338 kubelet[2555]: I0129 11:01:56.276732 2555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:56.277338 kubelet[2555]: I0129 11:01:56.276632 2555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:56.306172 kubelet[2555]: I0129 11:01:56.306144 2555 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 29 11:01:56.313497 kubelet[2555]: I0129 11:01:56.312787 2555 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Jan 29 11:01:56.313497 kubelet[2555]: I0129 11:01:56.312935 2555 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Jan 29 11:01:56.350950 kubelet[2555]: I0129 11:01:56.350908 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:56.351127 kubelet[2555]: I0129 11:01:56.351103 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:56.351297 kubelet[2555]: I0129 11:01:56.351278 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6697d085349abaecd8b74e642f028e95-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6697d085349abaecd8b74e642f028e95\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:56.351380 kubelet[2555]: I0129 11:01:56.351367 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6697d085349abaecd8b74e642f028e95-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6697d085349abaecd8b74e642f028e95\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:56.351464 kubelet[2555]: I0129 11:01:56.351451 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6697d085349abaecd8b74e642f028e95-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6697d085349abaecd8b74e642f028e95\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:56.351540 kubelet[2555]: I0129 11:01:56.351528 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:56.351634 kubelet[2555]: I0129 11:01:56.351621 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:56.351768 kubelet[2555]: I0129 11:01:56.351705 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:01:56.351768 kubelet[2555]: I0129 11:01:56.351727 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:56.582013 kubelet[2555]: E0129 11:01:56.581850 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:56.582013 kubelet[2555]: E0129 11:01:56.581849 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:56.582013 kubelet[2555]: E0129 11:01:56.581857 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:56.694451 sudo[2592]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 11:01:56.694744 sudo[2592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 11:01:57.110937 sudo[2592]: pam_unix(sudo:session): session closed for user root
Jan 29 11:01:57.142073 kubelet[2555]: I0129 11:01:57.139729 2555 apiserver.go:52] "Watching apiserver"
Jan 29 11:01:57.150236 kubelet[2555]: I0129 11:01:57.150204 2555 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:01:57.186815 kubelet[2555]: I0129 11:01:57.186769 2555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:57.187391 kubelet[2555]: I0129 11:01:57.187200 2555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:57.189845 kubelet[2555]: E0129 11:01:57.189826 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:57.197185 kubelet[2555]: E0129 11:01:57.196793 2555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:01:57.197185 kubelet[2555]: E0129 11:01:57.196835 2555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 29 11:01:57.197185 kubelet[2555]: E0129 11:01:57.196938 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:57.197185 kubelet[2555]: E0129 11:01:57.196944 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:57.211157 kubelet[2555]: I0129 11:01:57.211107 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.211090244 podStartE2EDuration="1.211090244s" podCreationTimestamp="2025-01-29 11:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:01:57.21099858 +0000 UTC m=+1.140357761" watchObservedRunningTime="2025-01-29 11:01:57.211090244 +0000 UTC m=+1.140449425"
Jan 29 11:01:57.226623 kubelet[2555]: I0129 11:01:57.226174 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2261559850000001 podStartE2EDuration="1.226155985s" podCreationTimestamp="2025-01-29 11:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:01:57.217959592 +0000 UTC m=+1.147318773" watchObservedRunningTime="2025-01-29 11:01:57.226155985 +0000 UTC m=+1.155515166"
Jan 29 11:01:57.234349 kubelet[2555]: I0129 11:01:57.234295 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.234271913 podStartE2EDuration="1.234271913s" podCreationTimestamp="2025-01-29 11:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:01:57.226390704 +0000 UTC m=+1.155749885" watchObservedRunningTime="2025-01-29 11:01:57.234271913 +0000 UTC m=+1.163631094"
Jan 29 11:01:58.188319 kubelet[2555]: E0129 11:01:58.188171 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:58.188319 kubelet[2555]: E0129 11:01:58.188239 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:59.189935 kubelet[2555]: E0129 11:01:59.189632 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:59.189935 kubelet[2555]: E0129 11:01:59.189694 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:59.627670 sudo[1648]: pam_unix(sudo:session): session closed for user root
Jan 29 11:01:59.629838 sshd[1647]: Connection closed by 10.0.0.1 port 55096
Jan 29 11:01:59.629355 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
Jan 29 11:01:59.631958 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:55096.service: Deactivated successfully.
Jan 29 11:01:59.633653 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:01:59.633792 systemd[1]: session-7.scope: Consumed 10.706s CPU time, 156.1M memory peak, 0B memory swap peak.
Jan 29 11:01:59.635023 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:01:59.636302 systemd-logind[1452]: Removed session 7.
Jan 29 11:02:02.988650 kubelet[2555]: I0129 11:02:02.988615 2555 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 11:02:02.989669 kubelet[2555]: I0129 11:02:02.989098 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 11:02:02.989714 containerd[1467]: time="2025-01-29T11:02:02.988916662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:02:03.886808 systemd[1]: Created slice kubepods-besteffort-pod1ada0c1e_7915_446e_aea5_7336654a0be7.slice - libcontainer container kubepods-besteffort-pod1ada0c1e_7915_446e_aea5_7336654a0be7.slice.
Jan 29 11:02:03.901410 systemd[1]: Created slice kubepods-burstable-podde1907c8_ae64_42aa_bf5a_fbde965b5645.slice - libcontainer container kubepods-burstable-podde1907c8_ae64_42aa_bf5a_fbde965b5645.slice.
Jan 29 11:02:03.903853 kubelet[2555]: I0129 11:02:03.903646 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-run\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.903853 kubelet[2555]: I0129 11:02:03.903695 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-lib-modules\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.903853 kubelet[2555]: I0129 11:02:03.903716 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxw2\" (UniqueName: \"kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-kube-api-access-fxxw2\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.903853 kubelet[2555]: I0129 11:02:03.903733 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-etc-cni-netd\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.903853 kubelet[2555]: I0129 11:02:03.903755 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de1907c8-ae64-42aa-bf5a-fbde965b5645-clustermesh-secrets\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.903853 kubelet[2555]: I0129 11:02:03.903771 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ada0c1e-7915-446e-aea5-7336654a0be7-xtables-lock\") pod \"kube-proxy-pqxfp\" (UID: \"1ada0c1e-7915-446e-aea5-7336654a0be7\") " pod="kube-system/kube-proxy-pqxfp"
Jan 29 11:02:03.904051 kubelet[2555]: I0129 11:02:03.903807 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwvg6\" (UniqueName: \"kubernetes.io/projected/1ada0c1e-7915-446e-aea5-7336654a0be7-kube-api-access-zwvg6\") pod \"kube-proxy-pqxfp\" (UID: \"1ada0c1e-7915-446e-aea5-7336654a0be7\") " pod="kube-system/kube-proxy-pqxfp"
Jan 29 11:02:03.904051 kubelet[2555]: I0129 11:02:03.903837 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-kernel\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904051 kubelet[2555]: I0129 11:02:03.903856 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-net\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904051 kubelet[2555]: I0129 11:02:03.903871 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ada0c1e-7915-446e-aea5-7336654a0be7-lib-modules\") pod \"kube-proxy-pqxfp\" (UID: \"1ada0c1e-7915-446e-aea5-7336654a0be7\") " pod="kube-system/kube-proxy-pqxfp"
Jan 29 11:02:03.904051 kubelet[2555]: I0129 11:02:03.903885 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-bpf-maps\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904152 kubelet[2555]: I0129 11:02:03.903899 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-hostproc\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904152 kubelet[2555]: I0129 11:02:03.903912 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-cgroup\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904152 kubelet[2555]: I0129 11:02:03.903926 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cni-path\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904152 kubelet[2555]: I0129 11:02:03.903939 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-config-path\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904152 kubelet[2555]: I0129 11:02:03.903954 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ada0c1e-7915-446e-aea5-7336654a0be7-kube-proxy\") pod \"kube-proxy-pqxfp\" (UID: \"1ada0c1e-7915-446e-aea5-7336654a0be7\") " pod="kube-system/kube-proxy-pqxfp"
Jan 29 11:02:03.904152 kubelet[2555]: I0129 11:02:03.903968 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-hubble-tls\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:03.904282 kubelet[2555]: I0129 11:02:03.903982 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-xtables-lock\") pod \"cilium-5hwns\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") " pod="kube-system/cilium-5hwns"
Jan 29 11:02:04.094019 systemd[1]: Created slice kubepods-besteffort-podc1c697cb_82e1_460e_a304_9a5ed44f90c4.slice - libcontainer container kubepods-besteffort-podc1c697cb_82e1_460e_a304_9a5ed44f90c4.slice.
Jan 29 11:02:04.106597 kubelet[2555]: I0129 11:02:04.106498 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlhfd\" (UniqueName: \"kubernetes.io/projected/c1c697cb-82e1-460e-a304-9a5ed44f90c4-kube-api-access-nlhfd\") pod \"cilium-operator-6c4d7847fc-cvh5m\" (UID: \"c1c697cb-82e1-460e-a304-9a5ed44f90c4\") " pod="kube-system/cilium-operator-6c4d7847fc-cvh5m"
Jan 29 11:02:04.106597 kubelet[2555]: I0129 11:02:04.106554 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1c697cb-82e1-460e-a304-9a5ed44f90c4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cvh5m\" (UID: \"c1c697cb-82e1-460e-a304-9a5ed44f90c4\") " pod="kube-system/cilium-operator-6c4d7847fc-cvh5m"
Jan 29 11:02:04.131572 update_engine[1455]: I20250129 11:02:04.131024 1455 update_attempter.cc:509] Updating boot flags...
Jan 29 11:02:04.151657 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2643)
Jan 29 11:02:04.179599 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2645)
Jan 29 11:02:04.195774 kubelet[2555]: E0129 11:02:04.195745 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:02:04.197862 containerd[1467]: time="2025-01-29T11:02:04.197831297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqxfp,Uid:1ada0c1e-7915-446e-aea5-7336654a0be7,Namespace:kube-system,Attempt:0,}"
Jan 29 11:02:04.210623 kubelet[2555]: E0129 11:02:04.210515 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:02:04.213062 containerd[1467]: time="2025-01-29T11:02:04.212993314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hwns,Uid:de1907c8-ae64-42aa-bf5a-fbde965b5645,Namespace:kube-system,Attempt:0,}"
Jan 29 11:02:04.229240 containerd[1467]: time="2025-01-29T11:02:04.228951699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:02:04.229240 containerd[1467]: time="2025-01-29T11:02:04.229017370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:02:04.229240 containerd[1467]: time="2025-01-29T11:02:04.229033128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:02:04.229240 containerd[1467]: time="2025-01-29T11:02:04.229118716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:02:04.239761 containerd[1467]: time="2025-01-29T11:02:04.239657426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:02:04.240220 containerd[1467]: time="2025-01-29T11:02:04.239714458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:02:04.240220 containerd[1467]: time="2025-01-29T11:02:04.239730176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:02:04.240220 containerd[1467]: time="2025-01-29T11:02:04.240147717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:02:04.247759 systemd[1]: Started cri-containerd-90442f6a0e9221951f82ee2080ad732d47269c70f7d9a71f6368206a9ba277af.scope - libcontainer container 90442f6a0e9221951f82ee2080ad732d47269c70f7d9a71f6368206a9ba277af.
Jan 29 11:02:04.251286 systemd[1]: Started cri-containerd-9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6.scope - libcontainer container 9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6.
Jan 29 11:02:04.273372 containerd[1467]: time="2025-01-29T11:02:04.273123177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqxfp,Uid:1ada0c1e-7915-446e-aea5-7336654a0be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"90442f6a0e9221951f82ee2080ad732d47269c70f7d9a71f6368206a9ba277af\""
Jan 29 11:02:04.274010 containerd[1467]: time="2025-01-29T11:02:04.273778365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hwns,Uid:de1907c8-ae64-42aa-bf5a-fbde965b5645,Namespace:kube-system,Attempt:0,} returns sandbox id \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\""
Jan 29 11:02:04.275440 kubelet[2555]: E0129 11:02:04.274136 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:02:04.275970 kubelet[2555]: E0129 11:02:04.275889 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:02:04.277210 containerd[1467]: time="2025-01-29T11:02:04.277174445Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 11:02:04.279497 containerd[1467]: time="2025-01-29T11:02:04.278968151Z" level=info msg="CreateContainer within sandbox \"90442f6a0e9221951f82ee2080ad732d47269c70f7d9a71f6368206a9ba277af\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:02:04.293603 containerd[1467]: time="2025-01-29T11:02:04.293551251Z" level=info msg="CreateContainer within sandbox \"90442f6a0e9221951f82ee2080ad732d47269c70f7d9a71f6368206a9ba277af\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14601ddf3c36c1b2a30e619d7868080693cddabf38dc1bc8787608ccfc510d3a\""
Jan 29 11:02:04.294620 containerd[1467]: time="2025-01-29T11:02:04.294289346Z" level=info msg="StartContainer for \"14601ddf3c36c1b2a30e619d7868080693cddabf38dc1bc8787608ccfc510d3a\""
Jan 29 11:02:04.320755 systemd[1]: Started cri-containerd-14601ddf3c36c1b2a30e619d7868080693cddabf38dc1bc8787608ccfc510d3a.scope - libcontainer container 14601ddf3c36c1b2a30e619d7868080693cddabf38dc1bc8787608ccfc510d3a.
Jan 29 11:02:04.350937 containerd[1467]: time="2025-01-29T11:02:04.350809359Z" level=info msg="StartContainer for \"14601ddf3c36c1b2a30e619d7868080693cddabf38dc1bc8787608ccfc510d3a\" returns successfully"
Jan 29 11:02:04.397570 kubelet[2555]: E0129 11:02:04.397538 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:02:04.400521 containerd[1467]: time="2025-01-29T11:02:04.400167464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cvh5m,Uid:c1c697cb-82e1-460e-a304-9a5ed44f90c4,Namespace:kube-system,Attempt:0,}"
Jan 29 11:02:04.425838 containerd[1467]: time="2025-01-29T11:02:04.425767447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:02:04.425975 containerd[1467]: time="2025-01-29T11:02:04.425903428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:02:04.425975 containerd[1467]: time="2025-01-29T11:02:04.425920465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:02:04.426391 containerd[1467]: time="2025-01-29T11:02:04.426006893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:02:04.444767 systemd[1]: Started cri-containerd-1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999.scope - libcontainer container 1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999.
Jan 29 11:02:04.475023 containerd[1467]: time="2025-01-29T11:02:04.474986372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cvh5m,Uid:c1c697cb-82e1-460e-a304-9a5ed44f90c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999\"" Jan 29 11:02:04.475706 kubelet[2555]: E0129 11:02:04.475685 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:04.488997 kubelet[2555]: E0129 11:02:04.488972 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:05.203615 kubelet[2555]: E0129 11:02:05.203539 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:05.207021 kubelet[2555]: E0129 11:02:05.206994 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:05.224565 kubelet[2555]: I0129 11:02:05.224503 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pqxfp" podStartSLOduration=2.224485446 podStartE2EDuration="2.224485446s" podCreationTimestamp="2025-01-29 11:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:02:05.214032077 +0000 UTC m=+9.143391258" watchObservedRunningTime="2025-01-29 11:02:05.224485446 +0000 UTC m=+9.153844627" Jan 29 11:02:06.927613 kubelet[2555]: E0129 11:02:06.927509 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:07.209836 kubelet[2555]: E0129 11:02:07.209714 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:07.306930 kubelet[2555]: E0129 11:02:07.306862 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:08.212460 kubelet[2555]: E0129 11:02:08.212433 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:09.880239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441486243.mount: Deactivated successfully. 
Jan 29 11:02:12.153432 containerd[1467]: time="2025-01-29T11:02:12.153381269Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:02:12.154382 containerd[1467]: time="2025-01-29T11:02:12.154339684Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:02:12.155228 containerd[1467]: time="2025-01-29T11:02:12.155167554Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:02:12.156954 containerd[1467]: time="2025-01-29T11:02:12.156812453Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.879394282s" Jan 29 11:02:12.156954 containerd[1467]: time="2025-01-29T11:02:12.156849369Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:02:12.160836 containerd[1467]: time="2025-01-29T11:02:12.160640474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:02:12.166131 containerd[1467]: time="2025-01-29T11:02:12.166020564Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:02:12.202643 containerd[1467]: time="2025-01-29T11:02:12.202565558Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\"" Jan 29 11:02:12.203667 containerd[1467]: time="2025-01-29T11:02:12.203102659Z" level=info msg="StartContainer for \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\"" Jan 29 11:02:12.233744 systemd[1]: Started cri-containerd-a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e.scope - libcontainer container a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e. Jan 29 11:02:12.253399 containerd[1467]: time="2025-01-29T11:02:12.253365030Z" level=info msg="StartContainer for \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\" returns successfully" Jan 29 11:02:12.303652 systemd[1]: cri-containerd-a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e.scope: Deactivated successfully. 
Jan 29 11:02:12.432868 containerd[1467]: time="2025-01-29T11:02:12.432596664Z" level=info msg="shim disconnected" id=a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e namespace=k8s.io Jan 29 11:02:12.432868 containerd[1467]: time="2025-01-29T11:02:12.432654977Z" level=warning msg="cleaning up after shim disconnected" id=a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e namespace=k8s.io Jan 29 11:02:12.432868 containerd[1467]: time="2025-01-29T11:02:12.432662977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:02:13.186445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e-rootfs.mount: Deactivated successfully. Jan 29 11:02:13.235442 kubelet[2555]: E0129 11:02:13.235264 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:13.245644 containerd[1467]: time="2025-01-29T11:02:13.245418365Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:02:13.269113 containerd[1467]: time="2025-01-29T11:02:13.269052055Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\"" Jan 29 11:02:13.270533 containerd[1467]: time="2025-01-29T11:02:13.270496222Z" level=info msg="StartContainer for \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\"" Jan 29 11:02:13.298850 systemd[1]: Started cri-containerd-48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471.scope - libcontainer container 48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471. Jan 29 11:02:13.325118 containerd[1467]: time="2025-01-29T11:02:13.325062868Z" level=info msg="StartContainer for \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\" returns successfully" Jan 29 11:02:13.345545 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:02:13.345780 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:02:13.345854 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:02:13.351073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:02:13.351363 systemd[1]: cri-containerd-48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471.scope: Deactivated successfully. Jan 29 11:02:13.374709 containerd[1467]: time="2025-01-29T11:02:13.374484100Z" level=info msg="shim disconnected" id=48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471 namespace=k8s.io Jan 29 11:02:13.374709 containerd[1467]: time="2025-01-29T11:02:13.374539814Z" level=warning msg="cleaning up after shim disconnected" id=48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471 namespace=k8s.io Jan 29 11:02:13.374709 containerd[1467]: time="2025-01-29T11:02:13.374547773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:02:13.380752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:02:13.601744 containerd[1467]: time="2025-01-29T11:02:13.600985288Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:02:13.601744 containerd[1467]: time="2025-01-29T11:02:13.601502393Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:02:13.603254 containerd[1467]: time="2025-01-29T11:02:13.603222651Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:02:13.604694 containerd[1467]: time="2025-01-29T11:02:13.604649259Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.443976789s" Jan 29 11:02:13.604694 containerd[1467]: time="2025-01-29T11:02:13.604683416Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:02:13.607885 containerd[1467]: time="2025-01-29T11:02:13.607856839Z" level=info msg="CreateContainer within sandbox \"1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:02:13.617518 containerd[1467]: time="2025-01-29T11:02:13.617334312Z" level=info msg="CreateContainer within sandbox \"1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\"" Jan 29 11:02:13.618128 containerd[1467]: time="2025-01-29T11:02:13.618016880Z" level=info msg="StartContainer for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\"" Jan 29 11:02:13.641760 systemd[1]: Started cri-containerd-28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c.scope - libcontainer container 28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c. Jan 29 11:02:13.664208 containerd[1467]: time="2025-01-29T11:02:13.664161100Z" level=info msg="StartContainer for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" returns successfully" Jan 29 11:02:14.187358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471-rootfs.mount: Deactivated successfully. 
Jan 29 11:02:14.238925 kubelet[2555]: E0129 11:02:14.238240 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:14.242609 kubelet[2555]: E0129 11:02:14.242529 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:14.245331 containerd[1467]: time="2025-01-29T11:02:14.245287604Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:02:14.250592 kubelet[2555]: I0129 11:02:14.250508 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cvh5m" podStartSLOduration=1.120982117 podStartE2EDuration="10.250489989s" podCreationTimestamp="2025-01-29 11:02:04 +0000 UTC" firstStartedPulling="2025-01-29 11:02:04.47634258 +0000 UTC m=+8.405701761" lastFinishedPulling="2025-01-29 11:02:13.605850452 +0000 UTC m=+17.535209633" observedRunningTime="2025-01-29 11:02:14.24969759 +0000 UTC m=+18.179056771" watchObservedRunningTime="2025-01-29 11:02:14.250489989 +0000 UTC m=+18.179849170" Jan 29 11:02:14.269420 containerd[1467]: time="2025-01-29T11:02:14.269055159Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\"" Jan 29 11:02:14.269922 containerd[1467]: time="2025-01-29T11:02:14.269881354Z" level=info msg="StartContainer for \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\"" Jan 29 11:02:14.306764 systemd[1]: Started cri-containerd-0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03.scope - libcontainer container 0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03. Jan 29 11:02:14.364381 systemd[1]: cri-containerd-0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03.scope: Deactivated successfully. Jan 29 11:02:14.376040 containerd[1467]: time="2025-01-29T11:02:14.375980400Z" level=info msg="StartContainer for \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\" returns successfully" Jan 29 11:02:14.436663 containerd[1467]: time="2025-01-29T11:02:14.436536731Z" level=info msg="shim disconnected" id=0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03 namespace=k8s.io Jan 29 11:02:14.436663 containerd[1467]: time="2025-01-29T11:02:14.436625562Z" level=warning msg="cleaning up after shim disconnected" id=0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03 namespace=k8s.io Jan 29 11:02:14.436663 containerd[1467]: time="2025-01-29T11:02:14.436636400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:02:15.189030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03-rootfs.mount: Deactivated successfully. 
Jan 29 11:02:15.249558 kubelet[2555]: E0129 11:02:15.249329 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:15.249558 kubelet[2555]: E0129 11:02:15.249367 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:15.252329 containerd[1467]: time="2025-01-29T11:02:15.252042209Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:02:15.268859 containerd[1467]: time="2025-01-29T11:02:15.268814138Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\"" Jan 29 11:02:15.269639 containerd[1467]: time="2025-01-29T11:02:15.269616898Z" level=info msg="StartContainer for \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\"" Jan 29 11:02:15.299768 systemd[1]: Started cri-containerd-dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62.scope - libcontainer container dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62. Jan 29 11:02:15.321450 systemd[1]: cri-containerd-dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62.scope: Deactivated successfully. Jan 29 11:02:15.325179 containerd[1467]: time="2025-01-29T11:02:15.324967622Z" level=info msg="StartContainer for \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\" returns successfully" Jan 29 11:02:15.343800 containerd[1467]: time="2025-01-29T11:02:15.343749111Z" level=info msg="shim disconnected" id=dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62 namespace=k8s.io Jan 29 11:02:15.343800 containerd[1467]: time="2025-01-29T11:02:15.343798706Z" level=warning msg="cleaning up after shim disconnected" id=dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62 namespace=k8s.io Jan 29 11:02:15.343985 containerd[1467]: time="2025-01-29T11:02:15.343806865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:02:16.198548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62-rootfs.mount: Deactivated successfully. 
Jan 29 11:02:16.252969 kubelet[2555]: E0129 11:02:16.252931 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:16.255390 containerd[1467]: time="2025-01-29T11:02:16.255349900Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:02:16.269145 containerd[1467]: time="2025-01-29T11:02:16.269084854Z" level=info msg="CreateContainer within sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\"" Jan 29 11:02:16.271931 containerd[1467]: time="2025-01-29T11:02:16.270126954Z" level=info msg="StartContainer for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\"" Jan 29 11:02:16.297798 systemd[1]: Started cri-containerd-30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00.scope - libcontainer container 30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00. Jan 29 11:02:16.331574 containerd[1467]: time="2025-01-29T11:02:16.331523106Z" level=info msg="StartContainer for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" returns successfully" Jan 29 11:02:16.467151 kubelet[2555]: I0129 11:02:16.466800 2555 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 11:02:16.527201 systemd[1]: Created slice kubepods-burstable-pod0abc6b82_c7e7_4115_b8b8_9a151837c536.slice - libcontainer container kubepods-burstable-pod0abc6b82_c7e7_4115_b8b8_9a151837c536.slice. Jan 29 11:02:16.534710 systemd[1]: Created slice kubepods-burstable-podae7ffd82_6548_4a72_b0ea_7855a08c9ad8.slice - libcontainer container kubepods-burstable-podae7ffd82_6548_4a72_b0ea_7855a08c9ad8.slice. 
Jan 29 11:02:16.606333 kubelet[2555]: I0129 11:02:16.606283 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7ffd82-6548-4a72-b0ea-7855a08c9ad8-config-volume\") pod \"coredns-668d6bf9bc-gdw7g\" (UID: \"ae7ffd82-6548-4a72-b0ea-7855a08c9ad8\") " pod="kube-system/coredns-668d6bf9bc-gdw7g" Jan 29 11:02:16.606333 kubelet[2555]: I0129 11:02:16.606332 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnbqw\" (UniqueName: \"kubernetes.io/projected/0abc6b82-c7e7-4115-b8b8-9a151837c536-kube-api-access-vnbqw\") pod \"coredns-668d6bf9bc-pd2k5\" (UID: \"0abc6b82-c7e7-4115-b8b8-9a151837c536\") " pod="kube-system/coredns-668d6bf9bc-pd2k5" Jan 29 11:02:16.606496 kubelet[2555]: I0129 11:02:16.606357 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbjd\" (UniqueName: \"kubernetes.io/projected/ae7ffd82-6548-4a72-b0ea-7855a08c9ad8-kube-api-access-hqbjd\") pod \"coredns-668d6bf9bc-gdw7g\" (UID: \"ae7ffd82-6548-4a72-b0ea-7855a08c9ad8\") " pod="kube-system/coredns-668d6bf9bc-gdw7g" Jan 29 11:02:16.606496 kubelet[2555]: I0129 11:02:16.606376 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0abc6b82-c7e7-4115-b8b8-9a151837c536-config-volume\") pod \"coredns-668d6bf9bc-pd2k5\" (UID: \"0abc6b82-c7e7-4115-b8b8-9a151837c536\") " pod="kube-system/coredns-668d6bf9bc-pd2k5" Jan 29 11:02:16.831786 kubelet[2555]: E0129 11:02:16.831409 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:16.832205 containerd[1467]: time="2025-01-29T11:02:16.832163335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd2k5,Uid:0abc6b82-c7e7-4115-b8b8-9a151837c536,Namespace:kube-system,Attempt:0,}" Jan 29 11:02:16.837058 kubelet[2555]: E0129 11:02:16.836757 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:16.837315 containerd[1467]: time="2025-01-29T11:02:16.837277002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdw7g,Uid:ae7ffd82-6548-4a72-b0ea-7855a08c9ad8,Namespace:kube-system,Attempt:0,}" Jan 29 11:02:17.257001 kubelet[2555]: E0129 11:02:17.256950 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:17.281234 kubelet[2555]: I0129 11:02:17.280959 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5hwns" podStartSLOduration=6.397262185 podStartE2EDuration="14.280941177s" podCreationTimestamp="2025-01-29 11:02:03 +0000 UTC" firstStartedPulling="2025-01-29 11:02:04.276772742 +0000 UTC m=+8.206131923" lastFinishedPulling="2025-01-29 11:02:12.160451734 +0000 UTC m=+16.089810915" observedRunningTime="2025-01-29 11:02:17.279723411 +0000 UTC m=+21.209082632" watchObservedRunningTime="2025-01-29 11:02:17.280941177 +0000 UTC m=+21.210300318" Jan 29 11:02:18.258776 kubelet[2555]: E0129 11:02:18.258734 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:18.555100 systemd-networkd[1388]: cilium_host: Link UP Jan 29 11:02:18.555348 systemd-networkd[1388]: cilium_net: Link UP Jan 29 11:02:18.556727 systemd-networkd[1388]: cilium_net: Gained carrier Jan 29 11:02:18.556915 systemd-networkd[1388]: cilium_host: Gained carrier Jan 29 11:02:18.557018 systemd-networkd[1388]: cilium_net: Gained IPv6LL Jan 29 11:02:18.557143 systemd-networkd[1388]: cilium_host: Gained IPv6LL Jan 29 11:02:18.635770 systemd-networkd[1388]: cilium_vxlan: Link UP Jan 29 11:02:18.635832 systemd-networkd[1388]: cilium_vxlan: Gained carrier Jan 29 11:02:18.939622 kernel: NET: Registered PF_ALG protocol family Jan 29 11:02:19.260625 kubelet[2555]: E0129 11:02:19.260497 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:19.504051 systemd-networkd[1388]: lxc_health: Link UP Jan 29 11:02:19.510744 systemd-networkd[1388]: lxc_health: Gained carrier Jan 29 11:02:20.013041 systemd-networkd[1388]: lxcf9b47f307312: Link UP Jan 29 11:02:20.022603 kernel: eth0: renamed from tmpc985d Jan 29 11:02:20.034350 systemd-networkd[1388]: lxc59eb86df00c0: Link UP Jan 29 11:02:20.048733 kernel: eth0: renamed from tmpb9a1e Jan 29 11:02:20.055255 systemd-networkd[1388]: lxcf9b47f307312: Gained carrier Jan 29 11:02:20.055701 systemd-networkd[1388]: lxc59eb86df00c0: Gained carrier Jan 29 11:02:20.262221 kubelet[2555]: E0129 11:02:20.262088 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:20.446020 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL Jan 29 11:02:21.085039 systemd-networkd[1388]: lxcf9b47f307312: Gained IPv6LL Jan 29 11:02:21.263375 kubelet[2555]: E0129 11:02:21.263348 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:21.469787 systemd-networkd[1388]: lxc_health: Gained IPv6LL Jan 29 11:02:21.661730 systemd-networkd[1388]: lxc59eb86df00c0: Gained IPv6LL Jan 29 11:02:22.265426 kubelet[2555]: E0129 11:02:22.265381 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:23.290163 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562). Jan 29 11:02:23.341029 sshd[3796]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:23.342218 sshd-session[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:23.348040 systemd-logind[1452]: New session 8 of user core. Jan 29 11:02:23.357717 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:02:23.495302 sshd[3798]: Connection closed by 10.0.0.1 port 60562 Jan 29 11:02:23.495642 sshd-session[3796]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:23.500088 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:02:23.501698 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. 
Jan 29 11:02:23.503695 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:60562.service: Deactivated successfully. Jan 29 11:02:23.506898 systemd-logind[1452]: Removed session 8. Jan 29 11:02:23.631288 containerd[1467]: time="2025-01-29T11:02:23.631137969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:02:23.631288 containerd[1467]: time="2025-01-29T11:02:23.631198964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:02:23.631288 containerd[1467]: time="2025-01-29T11:02:23.631213803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:02:23.633019 containerd[1467]: time="2025-01-29T11:02:23.632080936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:02:23.633019 containerd[1467]: time="2025-01-29T11:02:23.632851077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:02:23.633019 containerd[1467]: time="2025-01-29T11:02:23.632864236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:02:23.633019 containerd[1467]: time="2025-01-29T11:02:23.632944469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:02:23.633019 containerd[1467]: time="2025-01-29T11:02:23.632735246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:02:23.649210 systemd[1]: run-containerd-runc-k8s.io-c985d49f117a8b2366314bf8771395380c6bb7fc65c0214b2f8a079cc12a7afa-runc.YbX7f4.mount: Deactivated successfully. Jan 29 11:02:23.664745 systemd[1]: Started cri-containerd-b9a1eca799e6b9e7aeb3d350bfdca25839f22e71cc78573132799137395b796b.scope - libcontainer container b9a1eca799e6b9e7aeb3d350bfdca25839f22e71cc78573132799137395b796b. Jan 29 11:02:23.665978 systemd[1]: Started cri-containerd-c985d49f117a8b2366314bf8771395380c6bb7fc65c0214b2f8a079cc12a7afa.scope - libcontainer container c985d49f117a8b2366314bf8771395380c6bb7fc65c0214b2f8a079cc12a7afa. 
Jan 29 11:02:23.676043 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:02:23.678156 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:02:23.697009 containerd[1467]: time="2025-01-29T11:02:23.696898806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdw7g,Uid:ae7ffd82-6548-4a72-b0ea-7855a08c9ad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9a1eca799e6b9e7aeb3d350bfdca25839f22e71cc78573132799137395b796b\"" Jan 29 11:02:23.698654 containerd[1467]: time="2025-01-29T11:02:23.698599234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd2k5,Uid:0abc6b82-c7e7-4115-b8b8-9a151837c536,Namespace:kube-system,Attempt:0,} returns sandbox id \"c985d49f117a8b2366314bf8771395380c6bb7fc65c0214b2f8a079cc12a7afa\"" Jan 29 11:02:23.698786 kubelet[2555]: E0129 11:02:23.698723 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:23.703314 containerd[1467]: time="2025-01-29T11:02:23.703037931Z" level=info msg="CreateContainer within sandbox \"b9a1eca799e6b9e7aeb3d350bfdca25839f22e71cc78573132799137395b796b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:02:23.704161 kubelet[2555]: E0129 11:02:23.704027 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:23.705336 containerd[1467]: time="2025-01-29T11:02:23.705311716Z" level=info msg="CreateContainer within sandbox \"c985d49f117a8b2366314bf8771395380c6bb7fc65c0214b2f8a079cc12a7afa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:02:23.719839 containerd[1467]: time="2025-01-29T11:02:23.719764958Z" level=info msg="CreateContainer within sandbox \"c985d49f117a8b2366314bf8771395380c6bb7fc65c0214b2f8a079cc12a7afa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82a377989a1b782d9e182e75b1a263809736ab1c9e40a37f28cb3fd98da407f5\"" Jan 29 11:02:23.720855 containerd[1467]: time="2025-01-29T11:02:23.720304757Z" level=info msg="StartContainer for \"82a377989a1b782d9e182e75b1a263809736ab1c9e40a37f28cb3fd98da407f5\"" Jan 29 11:02:23.721710 containerd[1467]: time="2025-01-29T11:02:23.721674331Z" level=info msg="CreateContainer within sandbox \"b9a1eca799e6b9e7aeb3d350bfdca25839f22e71cc78573132799137395b796b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41348da593fd6211b160d463ec17c92e25646dc5160807c1e570abfb29ad6ad2\"" Jan 29 11:02:23.722406 containerd[1467]: time="2025-01-29T11:02:23.722368277Z" level=info msg="StartContainer for \"41348da593fd6211b160d463ec17c92e25646dc5160807c1e570abfb29ad6ad2\"" Jan 29 11:02:23.750751 systemd[1]: Started cri-containerd-41348da593fd6211b160d463ec17c92e25646dc5160807c1e570abfb29ad6ad2.scope - libcontainer container 41348da593fd6211b160d463ec17c92e25646dc5160807c1e570abfb29ad6ad2. Jan 29 11:02:23.751982 systemd[1]: Started cri-containerd-82a377989a1b782d9e182e75b1a263809736ab1c9e40a37f28cb3fd98da407f5.scope - libcontainer container 82a377989a1b782d9e182e75b1a263809736ab1c9e40a37f28cb3fd98da407f5. 
Jan 29 11:02:23.794432 containerd[1467]: time="2025-01-29T11:02:23.794348233Z" level=info msg="StartContainer for \"82a377989a1b782d9e182e75b1a263809736ab1c9e40a37f28cb3fd98da407f5\" returns successfully" Jan 29 11:02:23.794432 containerd[1467]: time="2025-01-29T11:02:23.794368712Z" level=info msg="StartContainer for \"41348da593fd6211b160d463ec17c92e25646dc5160807c1e570abfb29ad6ad2\" returns successfully" Jan 29 11:02:24.275432 kubelet[2555]: E0129 11:02:24.275196 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:24.279898 kubelet[2555]: E0129 11:02:24.279864 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:24.285714 kubelet[2555]: I0129 11:02:24.285079 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pd2k5" podStartSLOduration=20.285067548 podStartE2EDuration="20.285067548s" podCreationTimestamp="2025-01-29 11:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:02:24.284787769 +0000 UTC m=+28.214146950" watchObservedRunningTime="2025-01-29 11:02:24.285067548 +0000 UTC m=+28.214426729" Jan 29 11:02:24.637390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365133870.mount: Deactivated successfully. Jan 29 11:02:25.281494 kubelet[2555]: E0129 11:02:25.281361 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:25.281494 kubelet[2555]: E0129 11:02:25.281430 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:26.282647 kubelet[2555]: E0129 11:02:26.282260 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:26.283000 kubelet[2555]: E0129 11:02:26.282905 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:02:28.508269 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572). Jan 29 11:02:28.559732 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:28.557389 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:28.563097 systemd-logind[1452]: New session 9 of user core. Jan 29 11:02:28.569761 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:02:28.697985 sshd[3985]: Connection closed by 10.0.0.1 port 60572 Jan 29 11:02:28.698341 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:28.701425 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:60572.service: Deactivated successfully. Jan 29 11:02:28.704072 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:02:28.705165 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. 
Jan 29 11:02:28.706390 systemd-logind[1452]: Removed session 9. Jan 29 11:02:33.714986 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:36674.service - OpenSSH per-connection server daemon (10.0.0.1:36674). Jan 29 11:02:33.758022 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 36674 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:33.759175 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:33.763086 systemd-logind[1452]: New session 10 of user core. Jan 29 11:02:33.772787 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:02:33.888704 sshd[4001]: Connection closed by 10.0.0.1 port 36674 Jan 29 11:02:33.890172 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:33.894333 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:36674.service: Deactivated successfully. Jan 29 11:02:33.896306 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:02:33.897154 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:02:33.898250 systemd-logind[1452]: Removed session 10. Jan 29 11:02:38.902335 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:36682.service - OpenSSH per-connection server daemon (10.0.0.1:36682). Jan 29 11:02:38.959481 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 36682 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:38.960642 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:38.965070 systemd-logind[1452]: New session 11 of user core. Jan 29 11:02:38.975750 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:02:39.097262 sshd[4019]: Connection closed by 10.0.0.1 port 36682 Jan 29 11:02:39.099327 sshd-session[4017]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:39.110098 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:36682.service: Deactivated successfully. Jan 29 11:02:39.113038 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:02:39.116495 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:02:39.120901 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:36684.service - OpenSSH per-connection server daemon (10.0.0.1:36684). Jan 29 11:02:39.124285 systemd-logind[1452]: Removed session 11. Jan 29 11:02:39.183641 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 36684 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:39.185175 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:39.190020 systemd-logind[1452]: New session 12 of user core. Jan 29 11:02:39.203570 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:02:39.377880 sshd[4034]: Connection closed by 10.0.0.1 port 36684 Jan 29 11:02:39.379146 sshd-session[4032]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:39.388731 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:36684.service: Deactivated successfully. Jan 29 11:02:39.393491 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:02:39.399651 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:02:39.410441 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:36700.service - OpenSSH per-connection server daemon (10.0.0.1:36700). Jan 29 11:02:39.411477 systemd-logind[1452]: Removed session 12. 
Jan 29 11:02:39.456063 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 36700 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:39.456609 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:39.464084 systemd-logind[1452]: New session 13 of user core. Jan 29 11:02:39.479781 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:02:39.602320 sshd[4046]: Connection closed by 10.0.0.1 port 36700 Jan 29 11:02:39.603018 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:39.606148 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:36700.service: Deactivated successfully. Jan 29 11:02:39.607813 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:02:39.612001 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:02:39.612925 systemd-logind[1452]: Removed session 13. Jan 29 11:02:44.614238 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:46938.service - OpenSSH per-connection server daemon (10.0.0.1:46938). Jan 29 11:02:44.664258 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 46938 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:44.665479 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:44.669984 systemd-logind[1452]: New session 14 of user core. Jan 29 11:02:44.677753 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:02:44.810421 sshd[4060]: Connection closed by 10.0.0.1 port 46938 Jan 29 11:02:44.809404 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:44.812954 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:46938.service: Deactivated successfully. Jan 29 11:02:44.816229 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:02:44.820629 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:02:44.823617 systemd-logind[1452]: Removed session 14. Jan 29 11:02:49.820174 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:46952.service - OpenSSH per-connection server daemon (10.0.0.1:46952). Jan 29 11:02:49.862217 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 46952 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:49.863349 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:49.867058 systemd-logind[1452]: New session 15 of user core. Jan 29 11:02:49.873749 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:02:49.984990 sshd[4074]: Connection closed by 10.0.0.1 port 46952 Jan 29 11:02:49.985524 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:50.003145 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:46952.service: Deactivated successfully. Jan 29 11:02:50.005724 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:02:50.007012 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:02:50.015941 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:46954.service - OpenSSH per-connection server daemon (10.0.0.1:46954). Jan 29 11:02:50.016782 systemd-logind[1452]: Removed session 15. 
Jan 29 11:02:50.053720 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 46954 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:50.055066 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:50.058411 systemd-logind[1452]: New session 16 of user core. Jan 29 11:02:50.067720 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:02:50.291177 sshd[4089]: Connection closed by 10.0.0.1 port 46954 Jan 29 11:02:50.291673 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:50.306185 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:46954.service: Deactivated successfully. Jan 29 11:02:50.308626 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:02:50.310214 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:02:50.315203 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:46970.service - OpenSSH per-connection server daemon (10.0.0.1:46970). Jan 29 11:02:50.317090 systemd-logind[1452]: Removed session 16. Jan 29 11:02:50.362043 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 46970 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:50.363447 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:50.367393 systemd-logind[1452]: New session 17 of user core. Jan 29 11:02:50.374752 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:02:51.089437 sshd[4102]: Connection closed by 10.0.0.1 port 46970 Jan 29 11:02:51.090318 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:51.096989 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:46970.service: Deactivated successfully. Jan 29 11:02:51.101008 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:02:51.103091 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:02:51.114966 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:46980.service - OpenSSH per-connection server daemon (10.0.0.1:46980). Jan 29 11:02:51.117060 systemd-logind[1452]: Removed session 17. Jan 29 11:02:51.157806 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 46980 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:51.159105 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:51.162842 systemd-logind[1452]: New session 18 of user core. Jan 29 11:02:51.173803 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:02:51.388118 sshd[4124]: Connection closed by 10.0.0.1 port 46980 Jan 29 11:02:51.390766 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:51.397988 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:46980.service: Deactivated successfully. Jan 29 11:02:51.399693 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:02:51.402065 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:02:51.408907 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:46988.service - OpenSSH per-connection server daemon (10.0.0.1:46988). Jan 29 11:02:51.409757 systemd-logind[1452]: Removed session 18. 
Jan 29 11:02:51.448891 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 46988 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:51.450468 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:51.454727 systemd-logind[1452]: New session 19 of user core. Jan 29 11:02:51.460753 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:02:51.571738 sshd[4136]: Connection closed by 10.0.0.1 port 46988 Jan 29 11:02:51.572460 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:51.575465 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:46988.service: Deactivated successfully. Jan 29 11:02:51.577632 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:02:51.579906 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:02:51.580981 systemd-logind[1452]: Removed session 19. Jan 29 11:02:56.583092 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:54138.service - OpenSSH per-connection server daemon (10.0.0.1:54138). Jan 29 11:02:56.626440 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 54138 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:02:56.627915 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:56.632290 systemd-logind[1452]: New session 20 of user core. Jan 29 11:02:56.641757 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:02:56.751057 sshd[4156]: Connection closed by 10.0.0.1 port 54138 Jan 29 11:02:56.751603 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:56.755219 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:54138.service: Deactivated successfully. Jan 29 11:02:56.757047 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:02:56.757809 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:02:56.758680 systemd-logind[1452]: Removed session 20. Jan 29 11:03:01.762041 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:54144.service - OpenSSH per-connection server daemon (10.0.0.1:54144). Jan 29 11:03:01.803951 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 54144 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:03:01.805045 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:01.808648 systemd-logind[1452]: New session 21 of user core. Jan 29 11:03:01.813830 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:03:01.918555 sshd[4170]: Connection closed by 10.0.0.1 port 54144 Jan 29 11:03:01.918887 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 11:03:01.922331 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:54144.service: Deactivated successfully. Jan 29 11:03:01.924379 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:03:01.925276 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:03:01.925979 systemd-logind[1452]: Removed session 21. Jan 29 11:03:06.934867 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:34868.service - OpenSSH per-connection server daemon (10.0.0.1:34868). 
Jan 29 11:03:06.977409 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:03:06.978795 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:06.982761 systemd-logind[1452]: New session 22 of user core. Jan 29 11:03:06.990750 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:03:07.098639 sshd[4187]: Connection closed by 10.0.0.1 port 34868 Jan 29 11:03:07.099174 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jan 29 11:03:07.109403 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:34868.service: Deactivated successfully. Jan 29 11:03:07.111276 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:03:07.112449 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:03:07.121837 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:34872.service - OpenSSH per-connection server daemon (10.0.0.1:34872). Jan 29 11:03:07.122737 systemd-logind[1452]: Removed session 22. Jan 29 11:03:07.161756 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:03:07.162970 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:07.166800 systemd-logind[1452]: New session 23 of user core. Jan 29 11:03:07.176755 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:03:09.630494 kubelet[2555]: I0129 11:03:09.630427 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gdw7g" podStartSLOduration=65.63041011 podStartE2EDuration="1m5.63041011s" podCreationTimestamp="2025-01-29 11:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:02:24.3065423 +0000 UTC m=+28.235901481" watchObservedRunningTime="2025-01-29 11:03:09.63041011 +0000 UTC m=+73.559769291" Jan 29 11:03:09.640578 containerd[1467]: time="2025-01-29T11:03:09.640522009Z" level=info msg="StopContainer for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" with timeout 30 (s)" Jan 29 11:03:09.641085 containerd[1467]: time="2025-01-29T11:03:09.640865642Z" level=info msg="Stop container \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" with signal terminated" Jan 29 11:03:09.653133 systemd[1]: cri-containerd-28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c.scope: Deactivated successfully. Jan 29 11:03:09.682279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c-rootfs.mount: Deactivated successfully. 
Jan 29 11:03:09.688445 containerd[1467]: time="2025-01-29T11:03:09.688389990Z" level=info msg="shim disconnected" id=28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c namespace=k8s.io Jan 29 11:03:09.688445 containerd[1467]: time="2025-01-29T11:03:09.688441549Z" level=warning msg="cleaning up after shim disconnected" id=28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c namespace=k8s.io Jan 29 11:03:09.688445 containerd[1467]: time="2025-01-29T11:03:09.688449909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:03:09.693822 containerd[1467]: time="2025-01-29T11:03:09.693774533Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:03:09.697203 containerd[1467]: time="2025-01-29T11:03:09.697171352Z" level=info msg="StopContainer for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" with timeout 2 (s)" Jan 29 11:03:09.697520 containerd[1467]: time="2025-01-29T11:03:09.697498426Z" level=info msg="Stop container \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" with signal terminated" Jan 29 11:03:09.705030 systemd-networkd[1388]: lxc_health: Link DOWN Jan 29 11:03:09.705040 systemd-networkd[1388]: lxc_health: Lost carrier Jan 29 11:03:09.728611 containerd[1467]: time="2025-01-29T11:03:09.726409947Z" level=info msg="StopContainer for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" returns successfully" Jan 29 11:03:09.729995 systemd[1]: cri-containerd-30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00.scope: Deactivated successfully. Jan 29 11:03:09.730252 systemd[1]: cri-containerd-30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00.scope: Consumed 6.476s CPU time. Jan 29 11:03:09.733118 containerd[1467]: time="2025-01-29T11:03:09.733081548Z" level=info msg="StopPodSandbox for \"1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999\"" Jan 29 11:03:09.737162 containerd[1467]: time="2025-01-29T11:03:09.737116155Z" level=info msg="Container to stop \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:03:09.739408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999-shm.mount: Deactivated successfully. Jan 29 11:03:09.745995 systemd[1]: cri-containerd-1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999.scope: Deactivated successfully. Jan 29 11:03:09.750433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00-rootfs.mount: Deactivated successfully. 
Jan 29 11:03:09.765594 containerd[1467]: time="2025-01-29T11:03:09.765179732Z" level=info msg="shim disconnected" id=30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00 namespace=k8s.io
Jan 29 11:03:09.765818 containerd[1467]: time="2025-01-29T11:03:09.765797121Z" level=warning msg="cleaning up after shim disconnected" id=30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00 namespace=k8s.io
Jan 29 11:03:09.765942 containerd[1467]: time="2025-01-29T11:03:09.765926478Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:09.771814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999-rootfs.mount: Deactivated successfully.
Jan 29 11:03:09.774017 containerd[1467]: time="2025-01-29T11:03:09.773696139Z" level=info msg="shim disconnected" id=1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999 namespace=k8s.io
Jan 29 11:03:09.774017 containerd[1467]: time="2025-01-29T11:03:09.774007093Z" level=warning msg="cleaning up after shim disconnected" id=1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999 namespace=k8s.io
Jan 29 11:03:09.774017 containerd[1467]: time="2025-01-29T11:03:09.774019413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:09.785321 containerd[1467]: time="2025-01-29T11:03:09.785278651Z" level=info msg="TearDown network for sandbox \"1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999\" successfully"
Jan 29 11:03:09.785321 containerd[1467]: time="2025-01-29T11:03:09.785315651Z" level=info msg="StopPodSandbox for \"1cc35e6481b1e4d1c9f5d171a1b7253dd65e26348c59dedd4286dca4d2f2b999\" returns successfully"
Jan 29 11:03:09.796211 containerd[1467]: time="2025-01-29T11:03:09.796148136Z" level=info msg="StopContainer for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" returns successfully"
Jan 29 11:03:09.796597 containerd[1467]: time="2025-01-29T11:03:09.796557689Z" level=info msg="StopPodSandbox for \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\""
Jan 29 11:03:09.796870 containerd[1467]: time="2025-01-29T11:03:09.796636447Z" level=info msg="Container to stop \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:03:09.796870 containerd[1467]: time="2025-01-29T11:03:09.796832684Z" level=info msg="Container to stop \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:03:09.796870 containerd[1467]: time="2025-01-29T11:03:09.796843684Z" level=info msg="Container to stop \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:03:09.796870 containerd[1467]: time="2025-01-29T11:03:09.796851804Z" level=info msg="Container to stop \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:03:09.796870 containerd[1467]: time="2025-01-29T11:03:09.796860403Z" level=info msg="Container to stop \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:03:09.803449 systemd[1]: cri-containerd-9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6.scope: Deactivated successfully.
Jan 29 11:03:09.832114 containerd[1467]: time="2025-01-29T11:03:09.832041172Z" level=info msg="shim disconnected" id=9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6 namespace=k8s.io
Jan 29 11:03:09.832636 containerd[1467]: time="2025-01-29T11:03:09.832431805Z" level=warning msg="cleaning up after shim disconnected" id=9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6 namespace=k8s.io
Jan 29 11:03:09.832636 containerd[1467]: time="2025-01-29T11:03:09.832451485Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:09.852201 containerd[1467]: time="2025-01-29T11:03:09.852112732Z" level=info msg="TearDown network for sandbox \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" successfully"
Jan 29 11:03:09.852201 containerd[1467]: time="2025-01-29T11:03:09.852151611Z" level=info msg="StopPodSandbox for \"9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6\" returns successfully"
Jan 29 11:03:09.927051 kubelet[2555]: I0129 11:03:09.926993 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlhfd\" (UniqueName: \"kubernetes.io/projected/c1c697cb-82e1-460e-a304-9a5ed44f90c4-kube-api-access-nlhfd\") pod \"c1c697cb-82e1-460e-a304-9a5ed44f90c4\" (UID: \"c1c697cb-82e1-460e-a304-9a5ed44f90c4\") "
Jan 29 11:03:09.927051 kubelet[2555]: I0129 11:03:09.927056 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1c697cb-82e1-460e-a304-9a5ed44f90c4-cilium-config-path\") pod \"c1c697cb-82e1-460e-a304-9a5ed44f90c4\" (UID: \"c1c697cb-82e1-460e-a304-9a5ed44f90c4\") "
Jan 29 11:03:09.930567 kubelet[2555]: I0129 11:03:09.930479 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1c697cb-82e1-460e-a304-9a5ed44f90c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1c697cb-82e1-460e-a304-9a5ed44f90c4" (UID: "c1c697cb-82e1-460e-a304-9a5ed44f90c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:03:09.932876 kubelet[2555]: I0129 11:03:09.932841 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c697cb-82e1-460e-a304-9a5ed44f90c4-kube-api-access-nlhfd" (OuterVolumeSpecName: "kube-api-access-nlhfd") pod "c1c697cb-82e1-460e-a304-9a5ed44f90c4" (UID: "c1c697cb-82e1-460e-a304-9a5ed44f90c4"). InnerVolumeSpecName "kube-api-access-nlhfd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:03:10.028218 kubelet[2555]: I0129 11:03:10.028141 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-hostproc\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028218 kubelet[2555]: I0129 11:03:10.028193 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-hubble-tls\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028373 kubelet[2555]: I0129 11:03:10.028260 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-hostproc" (OuterVolumeSpecName: "hostproc") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.028640 kubelet[2555]: I0129 11:03:10.028424 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-kernel\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028640 kubelet[2555]: I0129 11:03:10.028454 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-net\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028640 kubelet[2555]: I0129 11:03:10.028473 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-config-path\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028640 kubelet[2555]: I0129 11:03:10.028486 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-xtables-lock\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028640 kubelet[2555]: I0129 11:03:10.028502 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-cgroup\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028640 kubelet[2555]: I0129 11:03:10.028516 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-lib-modules\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028804 kubelet[2555]: I0129 11:03:10.028529 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-run\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028804 kubelet[2555]: I0129 11:03:10.028557 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxxw2\" (UniqueName: \"kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-kube-api-access-fxxw2\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028804 kubelet[2555]: I0129 11:03:10.028576 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-etc-cni-netd\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.028804 kubelet[2555]: I0129 11:03:10.028712 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.028804 kubelet[2555]: I0129 11:03:10.028741 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.028927 kubelet[2555]: I0129 11:03:10.028758 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.028927 kubelet[2555]: I0129 11:03:10.028815 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.028927 kubelet[2555]: I0129 11:03:10.028848 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.028631 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de1907c8-ae64-42aa-bf5a-fbde965b5645-clustermesh-secrets\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.029039 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-bpf-maps\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.029055 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cni-path\") pod \"de1907c8-ae64-42aa-bf5a-fbde965b5645\" (UID: \"de1907c8-ae64-42aa-bf5a-fbde965b5645\") "
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.029118 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.029127 2555 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.029136 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029454 kubelet[2555]: I0129 11:03:10.029144 2555 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nlhfd\" (UniqueName: \"kubernetes.io/projected/c1c697cb-82e1-460e-a304-9a5ed44f90c4-kube-api-access-nlhfd\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029679 kubelet[2555]: I0129 11:03:10.029153 2555 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029679 kubelet[2555]: I0129 11:03:10.029162 2555 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029679 kubelet[2555]: I0129 11:03:10.029170 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1c697cb-82e1-460e-a304-9a5ed44f90c4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029679 kubelet[2555]: I0129 11:03:10.029181 2555 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.029679 kubelet[2555]: I0129 11:03:10.029208 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cni-path" (OuterVolumeSpecName: "cni-path") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.029679 kubelet[2555]: I0129 11:03:10.029228 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.030746 kubelet[2555]: I0129 11:03:10.030708 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:03:10.030746 kubelet[2555]: I0129 11:03:10.030749 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.030843 kubelet[2555]: I0129 11:03:10.030765 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:03:10.030869 kubelet[2555]: I0129 11:03:10.030858 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:03:10.031415 kubelet[2555]: I0129 11:03:10.030921 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-kube-api-access-fxxw2" (OuterVolumeSpecName: "kube-api-access-fxxw2") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "kube-api-access-fxxw2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:03:10.031895 kubelet[2555]: I0129 11:03:10.031868 2555 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1907c8-ae64-42aa-bf5a-fbde965b5645-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de1907c8-ae64-42aa-bf5a-fbde965b5645" (UID: "de1907c8-ae64-42aa-bf5a-fbde965b5645"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 29 11:03:10.130236 kubelet[2555]: I0129 11:03:10.130197 2555 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130236 kubelet[2555]: I0129 11:03:10.130230 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de1907c8-ae64-42aa-bf5a-fbde965b5645-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130236 kubelet[2555]: I0129 11:03:10.130242 2555 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130446 kubelet[2555]: I0129 11:03:10.130250 2555 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxxw2\" (UniqueName: \"kubernetes.io/projected/de1907c8-ae64-42aa-bf5a-fbde965b5645-kube-api-access-fxxw2\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130446 kubelet[2555]: I0129 11:03:10.130258 2555 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130446 kubelet[2555]: I0129 11:03:10.130266 2555 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de1907c8-ae64-42aa-bf5a-fbde965b5645-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130446 kubelet[2555]: I0129 11:03:10.130274 2555 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.130446 kubelet[2555]: I0129 11:03:10.130282 2555 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de1907c8-ae64-42aa-bf5a-fbde965b5645-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:03:10.184682 systemd[1]: Removed slice kubepods-besteffort-podc1c697cb_82e1_460e_a304_9a5ed44f90c4.slice - libcontainer container kubepods-besteffort-podc1c697cb_82e1_460e_a304_9a5ed44f90c4.slice.
Jan 29 11:03:10.186311 systemd[1]: Removed slice kubepods-burstable-podde1907c8_ae64_42aa_bf5a_fbde965b5645.slice - libcontainer container kubepods-burstable-podde1907c8_ae64_42aa_bf5a_fbde965b5645.slice.
Jan 29 11:03:10.186666 systemd[1]: kubepods-burstable-podde1907c8_ae64_42aa_bf5a_fbde965b5645.slice: Consumed 6.607s CPU time.
Jan 29 11:03:10.378060 kubelet[2555]: I0129 11:03:10.378032 2555 scope.go:117] "RemoveContainer" containerID="28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c"
Jan 29 11:03:10.379234 containerd[1467]: time="2025-01-29T11:03:10.379188886Z" level=info msg="RemoveContainer for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\""
Jan 29 11:03:10.391606 containerd[1467]: time="2025-01-29T11:03:10.391135878Z" level=info msg="RemoveContainer for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" returns successfully"
Jan 29 11:03:10.391800 kubelet[2555]: I0129 11:03:10.391782 2555 scope.go:117] "RemoveContainer" containerID="28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c"
Jan 29 11:03:10.392196 containerd[1467]: time="2025-01-29T11:03:10.392046903Z" level=error msg="ContainerStatus for \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\": not found"
Jan 29 11:03:10.403893 kubelet[2555]: E0129 11:03:10.403366 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\": not found" containerID="28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c"
Jan 29 11:03:10.403893 kubelet[2555]: I0129 11:03:10.403446 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c"} err="failed to get container status \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\": rpc error: code = NotFound desc = an error occurred when try to find container \"28246f4f693b6a6ac58606f15eb56d0ed9f5955324a1684c287a6b372471920c\": not found"
Jan 29 11:03:10.403893 kubelet[2555]: I0129 11:03:10.403553 2555 scope.go:117] "RemoveContainer" containerID="30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00"
Jan 29 11:03:10.405095 containerd[1467]: time="2025-01-29T11:03:10.405065196Z" level=info msg="RemoveContainer for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\""
Jan 29 11:03:10.421114 containerd[1467]: time="2025-01-29T11:03:10.421071158Z" level=info msg="RemoveContainer for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" returns successfully"
Jan 29 11:03:10.421349 kubelet[2555]: I0129 11:03:10.421325 2555 scope.go:117] "RemoveContainer" containerID="dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62"
Jan 29 11:03:10.422677 containerd[1467]: time="2025-01-29T11:03:10.422411135Z" level=info msg="RemoveContainer for \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\""
Jan 29 11:03:10.427399 containerd[1467]: time="2025-01-29T11:03:10.427310730Z" level=info msg="RemoveContainer for \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\" returns successfully"
Jan 29 11:03:10.427511 kubelet[2555]: I0129 11:03:10.427485 2555 scope.go:117] "RemoveContainer" containerID="0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03"
Jan 29 11:03:10.428672 containerd[1467]: time="2025-01-29T11:03:10.428420070Z" level=info msg="RemoveContainer for \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\""
Jan 29 11:03:10.430784 containerd[1467]: time="2025-01-29T11:03:10.430757350Z" level=info msg="RemoveContainer for \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\" returns successfully"
Jan 29 11:03:10.431015 kubelet[2555]: I0129 11:03:10.430902 2555 scope.go:117] "RemoveContainer" containerID="48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471"
Jan 29 11:03:10.431818 containerd[1467]: time="2025-01-29T11:03:10.431790612Z" level=info msg="RemoveContainer for \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\""
Jan 29 11:03:10.433822 containerd[1467]: time="2025-01-29T11:03:10.433789737Z" level=info msg="RemoveContainer for \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\" returns successfully"
Jan 29 11:03:10.433977 kubelet[2555]: I0129 11:03:10.433951 2555 scope.go:117] "RemoveContainer" containerID="a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e"
Jan 29 11:03:10.435261 containerd[1467]: time="2025-01-29T11:03:10.435172833Z" level=info msg="RemoveContainer for \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\""
Jan 29 11:03:10.437500 containerd[1467]: time="2025-01-29T11:03:10.437471393Z" level=info msg="RemoveContainer for \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\" returns successfully"
Jan 29 11:03:10.437748 kubelet[2555]: I0129 11:03:10.437724 2555 scope.go:117] "RemoveContainer" containerID="30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00"
Jan 29 11:03:10.438120 containerd[1467]: time="2025-01-29T11:03:10.437985264Z" level=error msg="ContainerStatus for \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\": not found"
Jan 29 11:03:10.438184 kubelet[2555]: E0129 11:03:10.438141 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\": not found" containerID="30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00"
Jan 29 11:03:10.438184 kubelet[2555]: I0129 11:03:10.438162 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00"} err="failed to get container status \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\": rpc error: code = NotFound desc = an error occurred when try to find container \"30493d42437dd240f300e6bf0ad89218dd339e7910295889876ee2b563a3ae00\": not found"
Jan 29 11:03:10.438184 kubelet[2555]: I0129 11:03:10.438180 2555 scope.go:117] "RemoveContainer" containerID="dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62"
Jan 29 11:03:10.438519 containerd[1467]: time="2025-01-29T11:03:10.438395537Z" level=error msg="ContainerStatus for \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\": not found"
Jan 29 11:03:10.438569 kubelet[2555]: E0129 11:03:10.438519 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\": not found" containerID="dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62"
Jan 29 11:03:10.438569 kubelet[2555]: I0129 11:03:10.438535 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62"} err="failed to get container status \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbdfda7718a45d427f5f3dabc48d5eafe656d3872f8b2506b144211cd2236f62\": not found"
Jan 29 11:03:10.438569 kubelet[2555]: I0129 11:03:10.438554 2555 scope.go:117] "RemoveContainer" containerID="0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03"
Jan 29 11:03:10.438787 containerd[1467]: time="2025-01-29T11:03:10.438715971Z" level=error msg="ContainerStatus for \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\": not found"
Jan 29 11:03:10.438868 kubelet[2555]: E0129 11:03:10.438810 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\": not found" containerID="0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03"
Jan 29 11:03:10.438868 kubelet[2555]: I0129 11:03:10.438828 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03"} err="failed to get container status \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f1b6ccde88bfe7c64680a2c0a412c54414b4778d02e4cc8e706cf6b1a21ae03\": not found"
Jan 29 11:03:10.438868 kubelet[2555]: I0129 11:03:10.438841 2555 scope.go:117] "RemoveContainer" containerID="48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471"
Jan 29 11:03:10.439283 containerd[1467]: time="2025-01-29T11:03:10.439167244Z" level=error msg="ContainerStatus for \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\": not found"
Jan 29 11:03:10.439323 kubelet[2555]: E0129 11:03:10.439279 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\": not found" containerID="48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471"
Jan 29 11:03:10.439323 kubelet[2555]: I0129 11:03:10.439296 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471"} err="failed to get container status \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\": rpc error: code = NotFound desc = an error occurred when try to find container \"48dfb53303af3b032d2d81d0f1330d5af0bcb4247e3ad70e7088095c8140e471\": not found"
Jan 29 11:03:10.439323 kubelet[2555]: I0129 11:03:10.439307 2555 scope.go:117] "RemoveContainer" containerID="a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e"
Jan 29 11:03:10.439468 containerd[1467]: time="2025-01-29T11:03:10.439435759Z" level=error msg="ContainerStatus for \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\": not found"
Jan 29 11:03:10.439655 kubelet[2555]: E0129 11:03:10.439636 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\": not found" containerID="a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e"
Jan 29 11:03:10.439710 kubelet[2555]: I0129 11:03:10.439661 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e"} err="failed to get container status \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a52990c79910bbb95adfb756dbab9748d255501270ef35e0ff6c1c0b4510c31e\": not found"
Jan 29 11:03:10.663546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6-rootfs.mount: Deactivated successfully.
Jan 29 11:03:10.663667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9755af9c94524111a5d79b6ae1b82dd0b1818c38d6ec61ef9a77190292e7d1a6-shm.mount: Deactivated successfully.
Jan 29 11:03:10.663724 systemd[1]: var-lib-kubelet-pods-c1c697cb\x2d82e1\x2d460e\x2da304\x2d9a5ed44f90c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlhfd.mount: Deactivated successfully.
Jan 29 11:03:10.663776 systemd[1]: var-lib-kubelet-pods-de1907c8\x2dae64\x2d42aa\x2dbf5a\x2dfbde965b5645-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxxw2.mount: Deactivated successfully.
Jan 29 11:03:10.663830 systemd[1]: var-lib-kubelet-pods-de1907c8\x2dae64\x2d42aa\x2dbf5a\x2dfbde965b5645-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 11:03:10.663880 systemd[1]: var-lib-kubelet-pods-de1907c8\x2dae64\x2d42aa\x2dbf5a\x2dfbde965b5645-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 11:03:11.220749 kubelet[2555]: E0129 11:03:11.220711 2555 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:03:11.601702 sshd[4201]: Connection closed by 10.0.0.1 port 34872
Jan 29 11:03:11.602088 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Jan 29 11:03:11.612086 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:34872.service: Deactivated successfully.
Jan 29 11:03:11.613700 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:03:11.613840 systemd[1]: session-23.scope: Consumed 1.784s CPU time.
Jan 29 11:03:11.615759 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:03:11.626834 systemd[1]: Started sshd@23-10.0.0.65:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884).
Jan 29 11:03:11.627763 systemd-logind[1452]: Removed session 23.
Jan 29 11:03:11.665118 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:03:11.666208 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:03:11.669662 systemd-logind[1452]: New session 24 of user core.
Jan 29 11:03:11.678731 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:03:12.179635 kubelet[2555]: I0129 11:03:12.179558 2555 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c697cb-82e1-460e-a304-9a5ed44f90c4" path="/var/lib/kubelet/pods/c1c697cb-82e1-460e-a304-9a5ed44f90c4/volumes"
Jan 29 11:03:12.179982 kubelet[2555]: I0129 11:03:12.179961 2555 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1907c8-ae64-42aa-bf5a-fbde965b5645" path="/var/lib/kubelet/pods/de1907c8-ae64-42aa-bf5a-fbde965b5645/volumes"
Jan 29 11:03:12.286925 sshd[4364]: Connection closed by 10.0.0.1 port 34884
Jan 29 11:03:12.289869 sshd-session[4362]: pam_unix(sshd:session): session closed for user core
Jan 29 11:03:12.297310 systemd[1]: sshd@23-10.0.0.65:22-10.0.0.1:34884.service: Deactivated successfully.
Jan 29 11:03:12.298805 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:03:12.300765 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:03:12.304847 kubelet[2555]: I0129 11:03:12.304811 2555 memory_manager.go:355] "RemoveStaleState removing state" podUID="c1c697cb-82e1-460e-a304-9a5ed44f90c4" containerName="cilium-operator"
Jan 29 11:03:12.304847 kubelet[2555]: I0129 11:03:12.304843 2555 memory_manager.go:355] "RemoveStaleState removing state" podUID="de1907c8-ae64-42aa-bf5a-fbde965b5645" containerName="cilium-agent"
Jan 29 11:03:12.312879 systemd[1]: Started sshd@24-10.0.0.65:22-10.0.0.1:34894.service - OpenSSH per-connection server daemon (10.0.0.1:34894).
Jan 29 11:03:12.315993 systemd-logind[1452]: Removed session 24.
Jan 29 11:03:12.331570 systemd[1]: Created slice kubepods-burstable-podbb080ea6_8531_46d7_8077_db95dcffa55c.slice - libcontainer container kubepods-burstable-podbb080ea6_8531_46d7_8077_db95dcffa55c.slice.
Jan 29 11:03:12.369837 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 34894 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:03:12.371109 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:03:12.374731 systemd-logind[1452]: New session 25 of user core.
Jan 29 11:03:12.384861 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:03:12.436730 sshd[4377]: Connection closed by 10.0.0.1 port 34894
Jan 29 11:03:12.437080 sshd-session[4375]: pam_unix(sshd:session): session closed for user core
Jan 29 11:03:12.442030 kubelet[2555]: I0129 11:03:12.442003 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-cni-path\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442133 kubelet[2555]: I0129 11:03:12.442040 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb080ea6-8531-46d7-8077-db95dcffa55c-clustermesh-secrets\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442133 kubelet[2555]: I0129 11:03:12.442061 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb080ea6-8531-46d7-8077-db95dcffa55c-cilium-ipsec-secrets\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442133 kubelet[2555]: I0129 11:03:12.442076 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-host-proc-sys-kernel\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442133 kubelet[2555]: I0129 11:03:12.442118 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zshp9\" (UniqueName: \"kubernetes.io/projected/bb080ea6-8531-46d7-8077-db95dcffa55c-kube-api-access-zshp9\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442231 kubelet[2555]: I0129 11:03:12.442188 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-cilium-run\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442231 kubelet[2555]: I0129 11:03:12.442220 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-bpf-maps\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442271 kubelet[2555]: I0129 11:03:12.442240 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-lib-modules\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442271 kubelet[2555]: I0129 11:03:12.442257 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-host-proc-sys-net\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442312 kubelet[2555]: I0129 11:03:12.442277 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-hostproc\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442312 kubelet[2555]: I0129 11:03:12.442294 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb080ea6-8531-46d7-8077-db95dcffa55c-cilium-config-path\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442353 kubelet[2555]: I0129 11:03:12.442313 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-xtables-lock\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442353 kubelet[2555]: I0129 11:03:12.442334 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-cilium-cgroup\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.442353 kubelet[2555]: I0129 11:03:12.442351 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb080ea6-8531-46d7-8077-db95dcffa55c-etc-cni-netd\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.445191 kubelet[2555]: I0129 11:03:12.442709 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb080ea6-8531-46d7-8077-db95dcffa55c-hubble-tls\") pod \"cilium-jfsrz\" (UID: \"bb080ea6-8531-46d7-8077-db95dcffa55c\") " pod="kube-system/cilium-jfsrz"
Jan 29 11:03:12.447734 systemd[1]: sshd@24-10.0.0.65:22-10.0.0.1:34894.service: Deactivated successfully.
Jan 29 11:03:12.449335 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:03:12.450715 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:03:12.459819 systemd[1]: Started sshd@25-10.0.0.65:22-10.0.0.1:34908.service - OpenSSH per-connection server daemon (10.0.0.1:34908).
Jan 29 11:03:12.461253 systemd-logind[1452]: Removed session 25.
Jan 29 11:03:12.501033 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 34908 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:03:12.502244 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:03:12.505856 systemd-logind[1452]: New session 26 of user core.
Jan 29 11:03:12.515715 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:03:12.637167 kubelet[2555]: E0129 11:03:12.637123 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:12.637757 containerd[1467]: time="2025-01-29T11:03:12.637677092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfsrz,Uid:bb080ea6-8531-46d7-8077-db95dcffa55c,Namespace:kube-system,Attempt:0,}"
Jan 29 11:03:12.655131 containerd[1467]: time="2025-01-29T11:03:12.655045289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:03:12.655131 containerd[1467]: time="2025-01-29T11:03:12.655108008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:03:12.655131 containerd[1467]: time="2025-01-29T11:03:12.655119968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:03:12.655305 containerd[1467]: time="2025-01-29T11:03:12.655203327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:03:12.690760 systemd[1]: Started cri-containerd-3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b.scope - libcontainer container 3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b.
Jan 29 11:03:12.716659 containerd[1467]: time="2025-01-29T11:03:12.716617125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfsrz,Uid:bb080ea6-8531-46d7-8077-db95dcffa55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\""
Jan 29 11:03:12.717303 kubelet[2555]: E0129 11:03:12.717283 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:12.719043 containerd[1467]: time="2025-01-29T11:03:12.719013326Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:03:12.737403 containerd[1467]: time="2025-01-29T11:03:12.737269108Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741\""
Jan 29 11:03:12.738030 containerd[1467]: time="2025-01-29T11:03:12.737811859Z" level=info msg="StartContainer for \"d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741\""
Jan 29 11:03:12.763749 systemd[1]: Started cri-containerd-d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741.scope - libcontainer container d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741.
Jan 29 11:03:12.791871 containerd[1467]: time="2025-01-29T11:03:12.791823338Z" level=info msg="StartContainer for \"d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741\" returns successfully"
Jan 29 11:03:12.809760 systemd[1]: cri-containerd-d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741.scope: Deactivated successfully.
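RunPodSandbox returns the sandbox ID that the subsequent CreateContainer/StartContainer calls are scoped to. With a CRI client pointed at containerd's socket, the same objects can be inspected directly (illustrative commands; assumes crictl is installed and configured for this runtime):

    crictl pods --name cilium-jfsrz
    crictl inspectp 3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b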
Jan 29 11:03:12.842270 containerd[1467]: time="2025-01-29T11:03:12.842207996Z" level=info msg="shim disconnected" id=d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741 namespace=k8s.io
Jan 29 11:03:12.842270 containerd[1467]: time="2025-01-29T11:03:12.842262155Z" level=warning msg="cleaning up after shim disconnected" id=d4ca6b7f53393f464b8380315c1af702409a4d8fc1a835248afba987f067f741 namespace=k8s.io
Jan 29 11:03:12.842270 containerd[1467]: time="2025-01-29T11:03:12.842270435Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:13.389364 kubelet[2555]: E0129 11:03:13.389325 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:13.392231 containerd[1467]: time="2025-01-29T11:03:13.391969707Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:03:13.414755 containerd[1467]: time="2025-01-29T11:03:13.414702028Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4\""
Jan 29 11:03:13.415452 containerd[1467]: time="2025-01-29T11:03:13.415178340Z" level=info msg="StartContainer for \"c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4\""
Jan 29 11:03:13.438745 systemd[1]: Started cri-containerd-c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4.scope - libcontainer container c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4.
Jan 29 11:03:13.461821 containerd[1467]: time="2025-01-29T11:03:13.461696805Z" level=info msg="StartContainer for \"c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4\" returns successfully"
Jan 29 11:03:13.469749 systemd[1]: cri-containerd-c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4.scope: Deactivated successfully.
Jan 29 11:03:13.494319 containerd[1467]: time="2025-01-29T11:03:13.494259731Z" level=info msg="shim disconnected" id=c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4 namespace=k8s.io
Jan 29 11:03:13.494319 containerd[1467]: time="2025-01-29T11:03:13.494314010Z" level=warning msg="cleaning up after shim disconnected" id=c3f798270f9f00920e8145ca86a2872d58ea0e898b27bd6eb1be2248c1aaa2d4 namespace=k8s.io
Jan 29 11:03:13.494319 containerd[1467]: time="2025-01-29T11:03:13.494322450Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:14.177199 kubelet[2555]: E0129 11:03:14.177153 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:14.177941 kubelet[2555]: E0129 11:03:14.177830 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:14.392904 kubelet[2555]: E0129 11:03:14.392783 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:14.395159 containerd[1467]: time="2025-01-29T11:03:14.395122649Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:03:14.418325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587155822.mount: Deactivated successfully.
Jan 29 11:03:14.427969 containerd[1467]: time="2025-01-29T11:03:14.427877428Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b\""
Jan 29 11:03:14.428821 containerd[1467]: time="2025-01-29T11:03:14.428794974Z" level=info msg="StartContainer for \"1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b\""
Jan 29 11:03:14.456747 systemd[1]: Started cri-containerd-1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b.scope - libcontainer container 1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b.
Jan 29 11:03:14.481970 systemd[1]: cri-containerd-1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b.scope: Deactivated successfully.
Jan 29 11:03:14.483081 containerd[1467]: time="2025-01-29T11:03:14.483044583Z" level=info msg="StartContainer for \"1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b\" returns successfully"
Jan 29 11:03:14.506287 containerd[1467]: time="2025-01-29T11:03:14.506218948Z" level=info msg="shim disconnected" id=1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b namespace=k8s.io
Jan 29 11:03:14.506287 containerd[1467]: time="2025-01-29T11:03:14.506282588Z" level=warning msg="cleaning up after shim disconnected" id=1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b namespace=k8s.io
Jan 29 11:03:14.506287 containerd[1467]: time="2025-01-29T11:03:14.506291667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:14.548226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1466d9384d975078b193431aab80ffc2dca006909c951e123dd4a32ff84c988b-rootfs.mount: Deactivated successfully.
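mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs each start, exit, and have their scope deactivated in turn; together with the clean-cilium-state and cilium-agent containers that follow, this matches Cilium's usual init-container chain, so the repeated "shim disconnected" cleanups here are expected exits rather than failures. The exited steps stay visible to the runtime (illustrative; assumes crictl as above):

    # Show all containers, including the exited init steps, in the new sandbox:
    crictl ps -a --pod 3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b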
Jan 29 11:03:15.177229 kubelet[2555]: E0129 11:03:15.176869 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:15.397769 kubelet[2555]: E0129 11:03:15.397724 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:15.400912 containerd[1467]: time="2025-01-29T11:03:15.400787125Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:03:15.414447 containerd[1467]: time="2025-01-29T11:03:15.414301964Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f\""
Jan 29 11:03:15.414972 containerd[1467]: time="2025-01-29T11:03:15.414945355Z" level=info msg="StartContainer for \"873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f\""
Jan 29 11:03:15.442759 systemd[1]: Started cri-containerd-873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f.scope - libcontainer container 873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f.
Jan 29 11:03:15.463921 systemd[1]: cri-containerd-873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f.scope: Deactivated successfully.
Jan 29 11:03:15.465233 containerd[1467]: time="2025-01-29T11:03:15.465176410Z" level=info msg="StartContainer for \"873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f\" returns successfully"
Jan 29 11:03:15.487534 containerd[1467]: time="2025-01-29T11:03:15.487452279Z" level=info msg="shim disconnected" id=873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f namespace=k8s.io
Jan 29 11:03:15.487534 containerd[1467]: time="2025-01-29T11:03:15.487524078Z" level=warning msg="cleaning up after shim disconnected" id=873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f namespace=k8s.io
Jan 29 11:03:15.487534 containerd[1467]: time="2025-01-29T11:03:15.487532558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:03:15.548355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-873703c1685254843694329448370502a48f2fe7f822939c6cda4476da90632f-rootfs.mount: Deactivated successfully.
Jan 29 11:03:16.221659 kubelet[2555]: E0129 11:03:16.221612 2555 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:03:16.401773 kubelet[2555]: E0129 11:03:16.400289 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:16.405923 containerd[1467]: time="2025-01-29T11:03:16.405282653Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:03:16.423066 containerd[1467]: time="2025-01-29T11:03:16.423019998Z" level=info msg="CreateContainer within sandbox \"3b25057438e298549e20bbbda5530ab2bdebbc355d68d12a50612369b3492c9b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7b3f9adc0fa80da290032eeeeef407997943f95f4e94738a45d671ff2313d9d7\""
Jan 29 11:03:16.424013 containerd[1467]: time="2025-01-29T11:03:16.423481831Z" level=info msg="StartContainer for \"7b3f9adc0fa80da290032eeeeef407997943f95f4e94738a45d671ff2313d9d7\""
Jan 29 11:03:16.446732 systemd[1]: Started cri-containerd-7b3f9adc0fa80da290032eeeeef407997943f95f4e94738a45d671ff2313d9d7.scope - libcontainer container 7b3f9adc0fa80da290032eeeeef407997943f95f4e94738a45d671ff2313d9d7.
Jan 29 11:03:16.469796 containerd[1467]: time="2025-01-29T11:03:16.469307133Z" level=info msg="StartContainer for \"7b3f9adc0fa80da290032eeeeef407997943f95f4e94738a45d671ff2313d9d7\" returns successfully"
Jan 29 11:03:16.737614 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:03:17.405882 kubelet[2555]: E0129 11:03:17.405840 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:17.423283 kubelet[2555]: I0129 11:03:17.422574 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jfsrz" podStartSLOduration=5.422556706 podStartE2EDuration="5.422556706s" podCreationTimestamp="2025-01-29 11:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:03:17.421783317 +0000 UTC m=+81.351142498" watchObservedRunningTime="2025-01-29 11:03:17.422556706 +0000 UTC m=+81.351915887"
Jan 29 11:03:18.450310 kubelet[2555]: I0129 11:03:18.448904 2555 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:03:18Z","lastTransitionTime":"2025-01-29T11:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:03:18.639922 kubelet[2555]: E0129 11:03:18.639877 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:19.568521 systemd-networkd[1388]: lxc_health: Link UP
Jan 29 11:03:19.571271 systemd-networkd[1388]: lxc_health: Gained carrier
Jan 29 11:03:20.639246 kubelet[2555]: E0129 11:03:20.639193 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
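The setters.go:602 entry above records the node being marked NotReady because the CNI plugin is not yet initialized; once cilium-agent starts, the lxc_health link gaining carrier is the first sign of the datapath coming up. For reference, a small client-go sketch that reads the same Ready condition off the node object; the kubeconfig path is an assumption, while the node name "localhost" is taken directly from the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; the journal does not show it.
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // The log's node name is literally "localhost".
        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Print the same condition kubelet's setters.go entry serialized.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
    }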
Jan 29 11:03:20.798609 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Jan 29 11:03:21.414250 kubelet[2555]: E0129 11:03:21.414189 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:22.416051 kubelet[2555]: E0129 11:03:22.416011 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:23.177092 kubelet[2555]: E0129 11:03:23.177045 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:03:25.324322 sshd[4385]: Connection closed by 10.0.0.1 port 34908
Jan 29 11:03:25.324965 sshd-session[4383]: pam_unix(sshd:session): session closed for user core
Jan 29 11:03:25.328260 systemd[1]: sshd@25-10.0.0.65:22-10.0.0.1:34908.service: Deactivated successfully.
Jan 29 11:03:25.329951 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:03:25.331257 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:03:25.332449 systemd-logind[1452]: Removed session 26.