Mar 25 01:32:02.886328 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 25 01:32:02.886362 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Mar 24 23:39:14 -00 2025
Mar 25 01:32:02.886373 kernel: KASLR enabled
Mar 25 01:32:02.886379 kernel: efi: EFI v2.7 by EDK II
Mar 25 01:32:02.886384 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 25 01:32:02.886390 kernel: random: crng init done
Mar 25 01:32:02.886397 kernel: secureboot: Secure boot disabled
Mar 25 01:32:02.886402 kernel: ACPI: Early table checksum verification disabled
Mar 25 01:32:02.886408 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 25 01:32:02.886415 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 25 01:32:02.886421 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886427 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886433 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886439 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886446 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886453 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886460 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886466 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886472 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:32:02.886478 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 25 01:32:02.886484 kernel: NUMA: Failed to initialise from firmware
Mar 25 01:32:02.886489 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 25 01:32:02.886495 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff]
Mar 25 01:32:02.886501 kernel: Zone ranges:
Mar 25 01:32:02.886507 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 25 01:32:02.886514 kernel: DMA32 empty
Mar 25 01:32:02.886520 kernel: Normal empty
Mar 25 01:32:02.886526 kernel: Movable zone start for each node
Mar 25 01:32:02.886532 kernel: Early memory node ranges
Mar 25 01:32:02.886538 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 25 01:32:02.886544 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 25 01:32:02.886550 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 25 01:32:02.886555 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 25 01:32:02.886561 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 25 01:32:02.886567 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 25 01:32:02.886573 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 25 01:32:02.886579 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 25 01:32:02.886586 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 25 01:32:02.886592 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 25 01:32:02.886598 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 25 01:32:02.886606 kernel: psci: probing for conduit method from ACPI.
Mar 25 01:32:02.886612 kernel: psci: PSCIv1.1 detected in firmware.
Mar 25 01:32:02.886619 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 25 01:32:02.886626 kernel: psci: Trusted OS migration not required
Mar 25 01:32:02.886632 kernel: psci: SMC Calling Convention v1.1
Mar 25 01:32:02.886639 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 25 01:32:02.886645 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 25 01:32:02.886651 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 25 01:32:02.886658 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 25 01:32:02.886664 kernel: Detected PIPT I-cache on CPU0
Mar 25 01:32:02.886670 kernel: CPU features: detected: GIC system register CPU interface
Mar 25 01:32:02.886677 kernel: CPU features: detected: Hardware dirty bit management
Mar 25 01:32:02.886683 kernel: CPU features: detected: Spectre-v4
Mar 25 01:32:02.886691 kernel: CPU features: detected: Spectre-BHB
Mar 25 01:32:02.886697 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 25 01:32:02.886703 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 25 01:32:02.886710 kernel: CPU features: detected: ARM erratum 1418040
Mar 25 01:32:02.886716 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 25 01:32:02.886722 kernel: alternatives: applying boot alternatives
Mar 25 01:32:02.886729 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab
Mar 25 01:32:02.886744 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 01:32:02.886750 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 25 01:32:02.886757 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 01:32:02.886763 kernel: Fallback order for Node 0: 0
Mar 25 01:32:02.886772 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 25 01:32:02.886778 kernel: Policy zone: DMA
Mar 25 01:32:02.886784 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 01:32:02.886790 kernel: software IO TLB: area num 4.
Mar 25 01:32:02.886797 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 25 01:32:02.886803 kernel: Memory: 2387404K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184884K reserved, 0K cma-reserved)
Mar 25 01:32:02.886810 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 25 01:32:02.886816 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 01:32:02.886823 kernel: rcu: RCU event tracing is enabled.
Mar 25 01:32:02.886829 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 25 01:32:02.886836 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 01:32:02.886842 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 01:32:02.886850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 01:32:02.886865 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 25 01:32:02.886871 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 25 01:32:02.886878 kernel: GICv3: 256 SPIs implemented
Mar 25 01:32:02.886884 kernel: GICv3: 0 Extended SPIs implemented
Mar 25 01:32:02.886890 kernel: Root IRQ handler: gic_handle_irq
Mar 25 01:32:02.886896 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 25 01:32:02.886902 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 25 01:32:02.886909 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 25 01:32:02.886915 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 25 01:32:02.886922 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 25 01:32:02.886930 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 25 01:32:02.886936 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 25 01:32:02.886943 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 01:32:02.886949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:32:02.886955 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 25 01:32:02.886962 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 25 01:32:02.886968 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 25 01:32:02.886975 kernel: arm-pv: using stolen time PV
Mar 25 01:32:02.886981 kernel: Console: colour dummy device 80x25
Mar 25 01:32:02.886988 kernel: ACPI: Core revision 20230628
Mar 25 01:32:02.887000 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 25 01:32:02.887008 kernel: pid_max: default: 32768 minimum: 301
Mar 25 01:32:02.887014 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 01:32:02.887021 kernel: landlock: Up and running.
Mar 25 01:32:02.887027 kernel: SELinux: Initializing.
Mar 25 01:32:02.887039 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:32:02.887045 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:32:02.887052 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 25 01:32:02.887059 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 25 01:32:02.887065 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 01:32:02.887073 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 01:32:02.887080 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 25 01:32:02.887086 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 25 01:32:02.887092 kernel: Remapping and enabling EFI services.
Mar 25 01:32:02.887099 kernel: smp: Bringing up secondary CPUs ...
Mar 25 01:32:02.887105 kernel: Detected PIPT I-cache on CPU1
Mar 25 01:32:02.887112 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 25 01:32:02.887119 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 25 01:32:02.887125 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:32:02.887133 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 25 01:32:02.887140 kernel: Detected PIPT I-cache on CPU2
Mar 25 01:32:02.887152 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 25 01:32:02.887161 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 25 01:32:02.887168 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:32:02.887174 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 25 01:32:02.887181 kernel: Detected PIPT I-cache on CPU3
Mar 25 01:32:02.887188 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 25 01:32:02.887195 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 25 01:32:02.887203 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:32:02.887210 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 25 01:32:02.887217 kernel: smp: Brought up 1 node, 4 CPUs
Mar 25 01:32:02.887223 kernel: SMP: Total of 4 processors activated.
Mar 25 01:32:02.887230 kernel: CPU features: detected: 32-bit EL0 Support
Mar 25 01:32:02.887237 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 25 01:32:02.887244 kernel: CPU features: detected: Common not Private translations
Mar 25 01:32:02.887251 kernel: CPU features: detected: CRC32 instructions
Mar 25 01:32:02.887259 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 25 01:32:02.887266 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 25 01:32:02.887273 kernel: CPU features: detected: LSE atomic instructions
Mar 25 01:32:02.887279 kernel: CPU features: detected: Privileged Access Never
Mar 25 01:32:02.887286 kernel: CPU features: detected: RAS Extension Support
Mar 25 01:32:02.887293 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 25 01:32:02.887300 kernel: CPU: All CPU(s) started at EL1
Mar 25 01:32:02.887307 kernel: alternatives: applying system-wide alternatives
Mar 25 01:32:02.887314 kernel: devtmpfs: initialized
Mar 25 01:32:02.887322 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 01:32:02.887329 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 25 01:32:02.887336 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 01:32:02.887343 kernel: SMBIOS 3.0.0 present.
Mar 25 01:32:02.887350 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 25 01:32:02.887356 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 01:32:02.887363 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 25 01:32:02.887370 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 25 01:32:02.887377 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 25 01:32:02.887386 kernel: audit: initializing netlink subsys (disabled)
Mar 25 01:32:02.887393 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Mar 25 01:32:02.887400 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 01:32:02.887407 kernel: cpuidle: using governor menu
Mar 25 01:32:02.887413 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 25 01:32:02.887420 kernel: ASID allocator initialised with 32768 entries
Mar 25 01:32:02.887427 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 01:32:02.887434 kernel: Serial: AMBA PL011 UART driver
Mar 25 01:32:02.887440 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 25 01:32:02.887449 kernel: Modules: 0 pages in range for non-PLT usage
Mar 25 01:32:02.887456 kernel: Modules: 509248 pages in range for PLT usage
Mar 25 01:32:02.887462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 01:32:02.887469 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 01:32:02.887476 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 25 01:32:02.887496 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 25 01:32:02.887503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 01:32:02.887510 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 01:32:02.887517 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 25 01:32:02.887525 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 25 01:32:02.887532 kernel: ACPI: Added _OSI(Module Device)
Mar 25 01:32:02.887538 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 01:32:02.887545 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 01:32:02.887552 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 01:32:02.887559 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 25 01:32:02.887566 kernel: ACPI: Interpreter enabled
Mar 25 01:32:02.887572 kernel: ACPI: Using GIC for interrupt routing
Mar 25 01:32:02.887579 kernel: ACPI: MCFG table detected, 1 entries
Mar 25 01:32:02.887586 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 25 01:32:02.887594 kernel: printk: console [ttyAMA0] enabled
Mar 25 01:32:02.887601 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 25 01:32:02.887723 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 25 01:32:02.887809 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 25 01:32:02.887900 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 25 01:32:02.887966 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 25 01:32:02.888028 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 25 01:32:02.888040 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 25 01:32:02.888047 kernel: PCI host bridge to bus 0000:00
Mar 25 01:32:02.888118 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 25 01:32:02.888176 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 25 01:32:02.888232 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 25 01:32:02.888290 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 25 01:32:02.888368 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 25 01:32:02.888449 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 25 01:32:02.888517 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 25 01:32:02.888581 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 25 01:32:02.888645 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 25 01:32:02.888707 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 25 01:32:02.888782 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 25 01:32:02.888850 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 25 01:32:02.888954 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 25 01:32:02.889013 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 25 01:32:02.889072 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 25 01:32:02.889081 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 25 01:32:02.889088 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 25 01:32:02.889095 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 25 01:32:02.889102 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 25 01:32:02.889111 kernel: iommu: Default domain type: Translated
Mar 25 01:32:02.889118 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 25 01:32:02.889125 kernel: efivars: Registered efivars operations
Mar 25 01:32:02.889131 kernel: vgaarb: loaded
Mar 25 01:32:02.889138 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 25 01:32:02.889145 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 01:32:02.889152 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 01:32:02.889159 kernel: pnp: PnP ACPI init
Mar 25 01:32:02.889228 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 25 01:32:02.889239 kernel: pnp: PnP ACPI: found 1 devices
Mar 25 01:32:02.889246 kernel: NET: Registered PF_INET protocol family
Mar 25 01:32:02.889253 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 25 01:32:02.889260 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 25 01:32:02.889267 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:32:02.889274 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:32:02.889281 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 25 01:32:02.889288 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 25 01:32:02.889296 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:32:02.889303 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:32:02.889310 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:32:02.889317 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:32:02.889323 kernel: kvm [1]: HYP mode not available
Mar 25 01:32:02.889330 kernel: Initialise system trusted keyrings
Mar 25 01:32:02.889337 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 25 01:32:02.889344 kernel: Key type asymmetric registered
Mar 25 01:32:02.889350 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:32:02.889357 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 25 01:32:02.889365 kernel: io scheduler mq-deadline registered
Mar 25 01:32:02.889372 kernel: io scheduler kyber registered
Mar 25 01:32:02.889379 kernel: io scheduler bfq registered
Mar 25 01:32:02.889386 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 25 01:32:02.889392 kernel: ACPI: button: Power Button [PWRB]
Mar 25 01:32:02.889400 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 25 01:32:02.889464 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 25 01:32:02.889473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:32:02.889480 kernel: thunder_xcv, ver 1.0
Mar 25 01:32:02.889488 kernel: thunder_bgx, ver 1.0
Mar 25 01:32:02.889495 kernel: nicpf, ver 1.0
Mar 25 01:32:02.889502 kernel: nicvf, ver 1.0
Mar 25 01:32:02.889571 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 25 01:32:02.889644 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-25T01:32:02 UTC (1742866322)
Mar 25 01:32:02.889654 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 25 01:32:02.889661 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 25 01:32:02.889670 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 25 01:32:02.889680 kernel: watchdog: Hard watchdog permanently disabled
Mar 25 01:32:02.889687 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:32:02.889695 kernel: Segment Routing with IPv6
Mar 25 01:32:02.889702 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:32:02.889712 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:32:02.889720 kernel: Key type dns_resolver registered
Mar 25 01:32:02.889731 kernel: registered taskstats version 1
Mar 25 01:32:02.889745 kernel: Loading compiled-in X.509 certificates
Mar 25 01:32:02.889752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ed4ababe871f0afac8b4236504477de11a6baf07'
Mar 25 01:32:02.889761 kernel: Key type .fscrypt registered
Mar 25 01:32:02.889767 kernel: Key type fscrypt-provisioning registered
Mar 25 01:32:02.889774 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:32:02.889781 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:32:02.889788 kernel: ima: No architecture policies found
Mar 25 01:32:02.889795 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 25 01:32:02.889801 kernel: clk: Disabling unused clocks
Mar 25 01:32:02.889808 kernel: Freeing unused kernel memory: 38464K
Mar 25 01:32:02.889815 kernel: Run /init as init process
Mar 25 01:32:02.889823 kernel: with arguments:
Mar 25 01:32:02.889830 kernel: /init
Mar 25 01:32:02.889840 kernel: with environment:
Mar 25 01:32:02.889847 kernel: HOME=/
Mar 25 01:32:02.889862 kernel: TERM=linux
Mar 25 01:32:02.889870 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:32:02.889878 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:32:02.889887 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:32:02.889898 systemd[1]: Detected virtualization kvm.
Mar 25 01:32:02.889905 systemd[1]: Detected architecture arm64.
Mar 25 01:32:02.889912 systemd[1]: Running in initrd.
Mar 25 01:32:02.889919 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:32:02.889926 systemd[1]: Hostname set to <localhost>.
Mar 25 01:32:02.889933 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:32:02.889940 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:32:02.889948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:32:02.889957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:32:02.889965 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:32:02.889973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:32:02.889980 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:32:02.889988 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:32:02.889996 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:32:02.890006 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:32:02.890013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:32:02.890020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:32:02.890028 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:32:02.890035 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:32:02.890042 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:32:02.890050 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:32:02.890057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:32:02.890064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:32:02.890074 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:32:02.890081 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:32:02.890088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:32:02.890096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:32:02.890103 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:32:02.890111 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:32:02.890118 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:32:02.890125 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:32:02.890134 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:32:02.890141 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:32:02.890149 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:32:02.890156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:32:02.890164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:32:02.890171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:32:02.890178 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:32:02.890188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:32:02.890195 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:32:02.890203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:02.890227 systemd-journald[237]: Collecting audit messages is disabled.
Mar 25 01:32:02.890246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:32:02.890254 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:32:02.890262 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:32:02.890269 systemd-journald[237]: Journal started
Mar 25 01:32:02.890288 systemd-journald[237]: Runtime Journal (/run/log/journal/dffe2b2cc4ad450991da419d3b6613f1) is 5.9M, max 47.3M, 41.4M free.
Mar 25 01:32:02.875302 systemd-modules-load[238]: Inserted module 'overlay'
Mar 25 01:32:02.893828 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 25 01:32:02.894467 kernel: Bridge firewalling registered
Mar 25 01:32:02.910252 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:32:02.912471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:32:02.916531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:32:02.917880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:32:02.921000 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:32:02.926308 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:32:02.928205 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:32:02.935519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:32:02.936987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:32:02.938870 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:32:02.942231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:32:02.944479 dracut-cmdline[274]: dracut-dracut-053
Mar 25 01:32:02.946970 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab
Mar 25 01:32:02.983336 systemd-resolved[288]: Positive Trust Anchors:
Mar 25 01:32:02.983348 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:32:02.983379 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:32:02.994707 systemd-resolved[288]: Defaulting to hostname 'linux'.
Mar 25 01:32:02.995692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:32:02.996548 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:32:03.016881 kernel: SCSI subsystem initialized
Mar 25 01:32:03.021869 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:32:03.028887 kernel: iscsi: registered transport (tcp)
Mar 25 01:32:03.041886 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:32:03.041912 kernel: QLogic iSCSI HBA Driver
Mar 25 01:32:03.081503 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:32:03.083513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:32:03.118885 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:32:03.118946 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:32:03.120032 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:32:03.166881 kernel: raid6: neonx8 gen() 15725 MB/s
Mar 25 01:32:03.183880 kernel: raid6: neonx4 gen() 15744 MB/s
Mar 25 01:32:03.200884 kernel: raid6: neonx2 gen() 13183 MB/s
Mar 25 01:32:03.217882 kernel: raid6: neonx1 gen() 10425 MB/s
Mar 25 01:32:03.234878 kernel: raid6: int64x8 gen() 6777 MB/s
Mar 25 01:32:03.251875 kernel: raid6: int64x4 gen() 7333 MB/s
Mar 25 01:32:03.268872 kernel: raid6: int64x2 gen() 6083 MB/s
Mar 25 01:32:03.285880 kernel: raid6: int64x1 gen() 5046 MB/s
Mar 25 01:32:03.285904 kernel: raid6: using algorithm neonx4 gen() 15744 MB/s
Mar 25 01:32:03.302884 kernel: raid6: .... xor() 12295 MB/s, rmw enabled
Mar 25 01:32:03.302909 kernel: raid6: using neon recovery algorithm
Mar 25 01:32:03.308008 kernel: xor: measuring software checksum speed
Mar 25 01:32:03.308022 kernel: 8regs : 21624 MB/sec
Mar 25 01:32:03.308031 kernel: 32regs : 21710 MB/sec
Mar 25 01:32:03.308933 kernel: arm64_neon : 28041 MB/sec
Mar 25 01:32:03.308963 kernel: xor: using function: arm64_neon (28041 MB/sec)
Mar 25 01:32:03.359875 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:32:03.370017 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:32:03.372420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:32:03.404398 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Mar 25 01:32:03.408045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:32:03.410822 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:32:03.438987 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Mar 25 01:32:03.463915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:32:03.465388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:32:03.515035 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:32:03.517327 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:32:03.535437 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:32:03.536977 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:32:03.538637 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:32:03.540960 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:32:03.543540 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:32:03.560918 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:32:03.565368 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 25 01:32:03.573550 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 25 01:32:03.573831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 01:32:03.573849 kernel: GPT:9289727 != 19775487
Mar 25 01:32:03.573869 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 01:32:03.573879 kernel: GPT:9289727 != 19775487
Mar 25 01:32:03.573888 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 01:32:03.573896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:32:03.574765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:32:03.574919 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:32:03.578150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:32:03.579250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:32:03.579510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:03.582953 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:32:03.584766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:32:03.594888 kernel: BTRFS: device fsid bf348154-9cb1-474d-801c-0e035a5758cf devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (526)
Mar 25 01:32:03.597878 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (524)
Mar 25 01:32:03.608442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 25 01:32:03.610571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:03.618361 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 25 01:32:03.633065 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 25 01:32:03.634230 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 25 01:32:03.642928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 25 01:32:03.644737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:32:03.646432 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:32:03.662928 disk-uuid[555]: Primary Header is updated.
Mar 25 01:32:03.662928 disk-uuid[555]: Secondary Entries is updated.
Mar 25 01:32:03.662928 disk-uuid[555]: Secondary Header is updated.
Mar 25 01:32:03.670882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:32:03.683325 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:32:04.675884 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:32:04.676845 disk-uuid[556]: The operation has completed successfully.
Mar 25 01:32:04.703970 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:32:04.704069 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:32:04.727967 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:32:04.747528 sh[575]: Success
Mar 25 01:32:04.760873 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 25 01:32:04.789683 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:32:04.792192 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:32:04.807882 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:32:04.813539 kernel: BTRFS info (device dm-0): first mount of filesystem bf348154-9cb1-474d-801c-0e035a5758cf
Mar 25 01:32:04.813562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:32:04.813572 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:32:04.814359 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:32:04.815864 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:32:04.818525 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:32:04.819771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:32:04.820393 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:32:04.822567 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:32:04.845339 kernel: BTRFS info (device vda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:32:04.845375 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:32:04.845385 kernel: BTRFS info (device vda6): using free space tree
Mar 25 01:32:04.847880 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 01:32:04.851880 kernel: BTRFS info (device vda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:32:04.854067 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:32:04.857949 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:32:04.918398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:32:04.921647 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:32:04.961449 ignition[665]: Ignition 2.20.0
Mar 25 01:32:04.961459 ignition[665]: Stage: fetch-offline
Mar 25 01:32:04.961488 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:32:04.961496 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:32:04.961700 ignition[665]: parsed url from cmdline: ""
Mar 25 01:32:04.961706 ignition[665]: no config URL provided
Mar 25 01:32:04.961710 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:32:04.961717 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:32:04.961748 ignition[665]: op(1): [started] loading QEMU firmware config module
Mar 25 01:32:04.961752 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 25 01:32:04.969070 systemd-networkd[763]: lo: Link UP
Mar 25 01:32:04.969074 systemd-networkd[763]: lo: Gained carrier
Mar 25 01:32:04.969919 systemd-networkd[763]: Enumeration completed
Mar 25 01:32:04.972175 ignition[665]: op(1): [finished] loading QEMU firmware config module
Mar 25 01:32:04.970406 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:32:04.970409 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:32:04.971098 systemd-networkd[763]: eth0: Link UP
Mar 25 01:32:04.971101 systemd-networkd[763]: eth0: Gained carrier
Mar 25 01:32:04.971107 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:32:04.971348 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:32:04.972291 systemd[1]: Reached target network.target - Network.
Mar 25 01:32:04.984887 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 25 01:32:05.017585 ignition[665]: parsing config with SHA512: df08bab5facf62cfe94029bdca6cff6a8aefcccceddb388573c84f28c2e9cdc7adffdc8f65d00741a7f28fa98d39b3eae9a73338389d46c8aea27acda6145c70
Mar 25 01:32:05.023927 unknown[665]: fetched base config from "system"
Mar 25 01:32:05.023935 unknown[665]: fetched user config from "qemu"
Mar 25 01:32:05.024360 ignition[665]: fetch-offline: fetch-offline passed
Mar 25 01:32:05.026056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:32:05.024425 ignition[665]: Ignition finished successfully
Mar 25 01:32:05.027261 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 25 01:32:05.028938 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:32:05.051409 ignition[772]: Ignition 2.20.0
Mar 25 01:32:05.051417 ignition[772]: Stage: kargs
Mar 25 01:32:05.051556 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:32:05.051565 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:32:05.052518 ignition[772]: kargs: kargs passed
Mar 25 01:32:05.054970 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:32:05.052557 ignition[772]: Ignition finished successfully
Mar 25 01:32:05.056586 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:32:05.078291 ignition[781]: Ignition 2.20.0
Mar 25 01:32:05.078300 ignition[781]: Stage: disks
Mar 25 01:32:05.078442 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:32:05.078452 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:32:05.080541 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:32:05.079287 ignition[781]: disks: disks passed
Mar 25 01:32:05.082193 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:32:05.079325 ignition[781]: Ignition finished successfully
Mar 25 01:32:05.082967 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:32:05.083767 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:32:05.084477 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:32:05.085931 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:32:05.087665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:32:05.107537 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 25 01:32:05.111166 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:32:05.112879 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:32:05.170792 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:32:05.172243 kernel: EXT4-fs (vda9): mounted filesystem a7a89271-ee7d-4bda-a834-705261d6cda9 r/w with ordered data mode. Quota mode: none.
Mar 25 01:32:05.171968 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:32:05.174248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:32:05.175747 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:32:05.176732 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 25 01:32:05.176772 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:32:05.176793 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:32:05.193226 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:32:05.195520 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:32:05.199578 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (801)
Mar 25 01:32:05.199602 kernel: BTRFS info (device vda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:32:05.199618 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:32:05.200314 kernel: BTRFS info (device vda6): using free space tree
Mar 25 01:32:05.203880 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 01:32:05.204002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:32:05.239171 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:32:05.242771 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:32:05.246511 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:32:05.250105 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:32:05.314512 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:32:05.316423 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:32:05.317970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:32:05.333886 kernel: BTRFS info (device vda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:32:05.344053 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:32:05.352982 ignition[915]: INFO : Ignition 2.20.0
Mar 25 01:32:05.352982 ignition[915]: INFO : Stage: mount
Mar 25 01:32:05.354208 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:32:05.354208 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:32:05.354208 ignition[915]: INFO : mount: mount passed
Mar 25 01:32:05.354208 ignition[915]: INFO : Ignition finished successfully
Mar 25 01:32:05.356317 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:32:05.358173 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:32:05.943117 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:32:05.944548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:32:05.966215 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (928)
Mar 25 01:32:05.966252 kernel: BTRFS info (device vda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:32:05.966263 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:32:05.967330 kernel: BTRFS info (device vda6): using free space tree
Mar 25 01:32:05.969887 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 01:32:05.970295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:32:05.996140 ignition[945]: INFO : Ignition 2.20.0
Mar 25 01:32:05.996140 ignition[945]: INFO : Stage: files
Mar 25 01:32:05.997320 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:32:05.997320 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:32:05.997320 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:32:05.999867 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:32:05.999867 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:32:06.001787 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:32:06.001787 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:32:06.001787 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:32:06.001787 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 25 01:32:06.001787 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 25 01:32:06.000439 unknown[945]: wrote ssh authorized keys file for user: core
Mar 25 01:32:06.399707 systemd-networkd[763]: eth0: Gained IPv6LL
Mar 25 01:32:06.964336 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:32:10.022023 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 25 01:32:10.023756 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:32:10.023756 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 25 01:32:10.354264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:32:10.438952 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 25 01:32:10.449803 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 25 01:32:10.677167 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 25 01:32:11.021598 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 25 01:32:11.021598 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 25 01:32:11.024658 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 25 01:32:11.038153 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 25 01:32:11.041343 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 25 01:32:11.043553 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 25 01:32:11.043553 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:32:11.043553 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:32:11.043553 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:32:11.043553 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:32:11.043553 ignition[945]: INFO : files: files passed
Mar 25 01:32:11.043553 ignition[945]: INFO : Ignition finished successfully
Mar 25 01:32:11.046934 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:32:11.050990 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:32:11.052986 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:32:11.061279 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:32:11.061376 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 25 01:32:11.064483 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 25 01:32:11.065901 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:32:11.065901 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:32:11.069307 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:32:11.068931 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:32:11.070469 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:32:11.074006 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:32:11.103054 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:32:11.103997 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:32:11.105115 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:32:11.106635 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:32:11.108268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:32:11.109722 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:32:11.123212 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:32:11.125244 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:32:11.146505 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:32:11.147621 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:32:11.149194 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:32:11.150539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:32:11.150668 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:32:11.152564 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:32:11.154033 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:32:11.155308 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:32:11.156620 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:32:11.158163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:32:11.159674 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:32:11.161072 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:32:11.162507 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:32:11.163919 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:32:11.165213 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:32:11.166325 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:32:11.166442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:32:11.168194 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:32:11.169634 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:32:11.171080 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:32:11.171155 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:32:11.172742 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:32:11.172872 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:32:11.174951 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:32:11.175080 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:32:11.176516 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:32:11.177702 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:32:11.181889 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:32:11.182883 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:32:11.184610 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:32:11.185812 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:32:11.185912 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:32:11.187116 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:32:11.187200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:32:11.188422 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:32:11.188531 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:32:11.189820 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:32:11.189937 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:32:11.191805 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:32:11.193742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:32:11.194486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:32:11.194604 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:32:11.195961 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:32:11.196058 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:32:11.204041 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:32:11.204135 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:32:11.212450 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:32:11.214838 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:32:11.215744 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:32:11.217721 ignition[1001]: INFO : Ignition 2.20.0
Mar 25 01:32:11.217721 ignition[1001]: INFO : Stage: umount
Mar 25 01:32:11.217721 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:32:11.217721 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:32:11.217721 ignition[1001]: INFO : umount: umount passed
Mar 25 01:32:11.217721 ignition[1001]: INFO : Ignition finished successfully
Mar 25 01:32:11.218951 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:32:11.219053 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:32:11.220613 systemd[1]: Stopped target network.target - Network.
Mar 25 01:32:11.221803 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:32:11.221874 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:32:11.223086 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:32:11.223126 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:32:11.224410 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:32:11.224449 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:32:11.225740 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:32:11.225778 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:32:11.227211 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:32:11.227256 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:32:11.228702 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:32:11.230001 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:32:11.237113 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:32:11.237205 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:32:11.239868 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:32:11.240098 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:32:11.240196 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:32:11.243692 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:32:11.244303 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:32:11.244351 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:32:11.247208 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:32:11.248411 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:32:11.248465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:32:11.250089 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:32:11.250135 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:32:11.252469 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:32:11.252512 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:32:11.253955 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:32:11.253992 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:32:11.256278 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:32:11.258531 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:32:11.258602 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:32:11.266297 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:32:11.266455 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:32:11.271412 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:32:11.271554 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:32:11.273317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:32:11.273353 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:32:11.274772 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:32:11.274804 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:32:11.276222 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:32:11.276266 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:32:11.278467 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:32:11.278512 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:32:11.280550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:32:11.280601 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:32:11.283626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:32:11.285136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:32:11.285192 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:32:11.287786 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 25 01:32:11.287830 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:32:11.289555 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:32:11.289606 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:32:11.291296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:32:11.291336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:11.294720 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 25 01:32:11.294774 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:32:11.301205 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:32:11.301316 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:32:11.303044 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:32:11.305051 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:32:11.326866 systemd[1]: Switching root.
Mar 25 01:32:11.370081 systemd-journald[237]: Journal stopped
Mar 25 01:32:12.188402 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:32:12.188451 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:32:12.188466 kernel: SELinux: policy capability open_perms=1
Mar 25 01:32:12.188476 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:32:12.188485 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:32:12.188494 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:32:12.188503 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:32:12.188515 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:32:12.188525 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:32:12.188534 kernel: audit: type=1403 audit(1742866331.569:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:32:12.188547 systemd[1]: Successfully loaded SELinux policy in 30.602ms.
Mar 25 01:32:12.188567 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.985ms.
Mar 25 01:32:12.188581 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:32:12.188594 systemd[1]: Detected virtualization kvm.
Mar 25 01:32:12.188606 systemd[1]: Detected architecture arm64.
Mar 25 01:32:12.188616 systemd[1]: Detected first boot.
Mar 25 01:32:12.188626 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:32:12.188636 zram_generator::config[1048]: No configuration found.
Mar 25 01:32:12.188647 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:32:12.188658 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:32:12.188673 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:32:12.188683 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:32:12.188693 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:32:12.188703 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:32:12.188713 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:32:12.188724 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:32:12.188734 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:32:12.188745 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:32:12.188756 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:32:12.188766 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:32:12.188776 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:32:12.188787 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:32:12.188797 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:32:12.188808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:32:12.188819 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:32:12.188829 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:32:12.188842 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:32:12.188852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:32:12.188872 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 25 01:32:12.188883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:32:12.188893 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:32:12.188903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:32:12.188913 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:32:12.188924 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:32:12.188936 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:32:12.188946 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:32:12.188956 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:32:12.188966 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:32:12.188976 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:32:12.188987 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:32:12.188997 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:32:12.189007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:32:12.189018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:32:12.189030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:32:12.189040 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:32:12.189051 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:32:12.189065 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:32:12.189075 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:32:12.189085 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:32:12.189095 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:32:12.189106 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:32:12.189646 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:32:12.189685 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:32:12.189697 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:32:12.189707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:32:12.189718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:32:12.189728 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:32:12.189739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:32:12.189750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:32:12.189760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:32:12.189786 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:32:12.189796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:32:12.189807 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:32:12.189817 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:32:12.189830 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:32:12.189841 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:32:12.189851 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:32:12.189960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:32:12.189976 kernel: fuse: init (API version 7.39)
Mar 25 01:32:12.189987 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:32:12.189997 kernel: loop: module loaded
Mar 25 01:32:12.190007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:32:12.190017 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:32:12.190027 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:32:12.190038 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:32:12.190047 kernel: ACPI: bus type drm_connector registered
Mar 25 01:32:12.190057 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:32:12.190069 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:32:12.190079 systemd[1]: Stopped verity-setup.service.
Mar 25 01:32:12.190089 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:32:12.190099 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:32:12.190110 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:32:12.190122 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:32:12.190132 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:32:12.190167 systemd-journald[1113]: Collecting audit messages is disabled.
Mar 25 01:32:12.190188 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:32:12.190199 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:32:12.190209 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:32:12.190219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:32:12.190229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:32:12.190243 systemd-journald[1113]: Journal started
Mar 25 01:32:12.190264 systemd-journald[1113]: Runtime Journal (/run/log/journal/dffe2b2cc4ad450991da419d3b6613f1) is 5.9M, max 47.3M, 41.4M free.
Mar 25 01:32:11.975708 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:32:11.988715 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 25 01:32:11.989105 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:32:12.191044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:32:12.193534 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:32:12.194226 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:32:12.194405 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:32:12.195671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:32:12.195839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:32:12.196995 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:32:12.197143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:32:12.198286 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:32:12.198438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:32:12.199713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:32:12.202933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:32:12.204169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:32:12.205522 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:32:12.206815 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:32:12.220399 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:32:12.222867 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 01:32:12.224618 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:32:12.225587 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:32:12.225618 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:32:12.227486 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:32:12.234631 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:32:12.236517 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:32:12.237437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:32:12.238484 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:32:12.240261 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:32:12.241270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:32:12.245019 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:32:12.245947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:32:12.248574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:32:12.248726 systemd-journald[1113]: Time spent on flushing to /var/log/journal/dffe2b2cc4ad450991da419d3b6613f1 is 12.192ms for 872 entries.
Mar 25 01:32:12.248726 systemd-journald[1113]: System Journal (/var/log/journal/dffe2b2cc4ad450991da419d3b6613f1) is 8M, max 195.6M, 187.6M free.
Mar 25 01:32:12.267793 systemd-journald[1113]: Received client request to flush runtime journal.
Mar 25 01:32:12.251204 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:32:12.253030 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:32:12.265330 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:32:12.266579 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:32:12.269391 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:32:12.270711 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:32:12.272144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:32:12.274900 kernel: loop0: detected capacity change from 0 to 189592
Mar 25 01:32:12.275188 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:32:12.280156 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:32:12.287027 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:32:12.290743 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:32:12.298886 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 01:32:12.300680 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Mar 25 01:32:12.301544 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Mar 25 01:32:12.304507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:32:12.307533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:32:12.310121 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:32:12.314068 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 25 01:32:12.325169 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:32:12.332897 kernel: loop1: detected capacity change from 0 to 103832
Mar 25 01:32:12.352413 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:32:12.356820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:32:12.370963 kernel: loop2: detected capacity change from 0 to 126448
Mar 25 01:32:12.380198 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 25 01:32:12.380215 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 25 01:32:12.385893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:32:12.415895 kernel: loop3: detected capacity change from 0 to 189592
Mar 25 01:32:12.423076 kernel: loop4: detected capacity change from 0 to 103832
Mar 25 01:32:12.428882 kernel: loop5: detected capacity change from 0 to 126448
Mar 25 01:32:12.433512 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 25 01:32:12.433996 (sd-merge)[1192]: Merged extensions into '/usr'.
Mar 25 01:32:12.437503 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:32:12.437645 systemd[1]: Reloading...
Mar 25 01:32:12.478887 zram_generator::config[1220]: No configuration found.
Mar 25 01:32:12.511951 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:32:12.582747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:32:12.632392 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:32:12.632934 systemd[1]: Reloading finished in 194 ms.
Mar 25 01:32:12.657486 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:32:12.660899 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:32:12.674082 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:32:12.675841 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:32:12.687682 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:32:12.687702 systemd[1]: Reloading...
Mar 25 01:32:12.708212 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:32:12.708410 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:32:12.709055 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:32:12.709261 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Mar 25 01:32:12.709304 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Mar 25 01:32:12.711627 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:32:12.711640 systemd-tmpfiles[1255]: Skipping /boot
Mar 25 01:32:12.720014 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:32:12.720032 systemd-tmpfiles[1255]: Skipping /boot
Mar 25 01:32:12.749878 zram_generator::config[1290]: No configuration found.
Mar 25 01:32:12.824679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:32:12.873925 systemd[1]: Reloading finished in 185 ms.
Mar 25 01:32:12.887896 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 01:32:12.899040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:32:12.906464 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:32:12.908611 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 01:32:12.918845 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 01:32:12.922198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:32:12.927102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:32:12.929282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 01:32:12.940932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:32:12.943883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:32:12.947170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:32:12.950191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:32:12.953095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:32:12.953217 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:32:12.967408 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:32:12.970576 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:32:12.970682 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Mar 25 01:32:12.974405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:32:12.974599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:32:12.976354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:32:12.976493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:32:12.978365 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:32:12.978516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:32:12.980282 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:32:12.988234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:32:12.989910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:32:12.994150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:32:12.996445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:32:12.996888 augenrules[1356]: No rules
Mar 25 01:32:12.999080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:32:12.999247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:32:13.010796 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:32:13.011820 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:32:13.014284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:32:13.017033 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:32:13.018683 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:32:13.020295 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:32:13.022121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:32:13.023901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:32:13.025489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:32:13.025649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:32:13.029039 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:32:13.029219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:32:13.031576 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:32:13.037405 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:32:13.051243 systemd[1]: Finished ensure-sysext.service.
Mar 25 01:32:13.057975 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:32:13.059034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:32:13.062786 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:32:13.066483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:32:13.082178 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:32:13.085089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:32:13.088124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:32:13.088169 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:32:13.094988 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:32:13.099506 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 25 01:32:13.100289 systemd-resolved[1323]: Positive Trust Anchors:
Mar 25 01:32:13.100670 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:32:13.101270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:32:13.102481 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:32:13.104152 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:32:13.104297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:32:13.104587 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:32:13.104683 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:32:13.106082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:32:13.106235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:32:13.109605 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:32:13.109969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:32:13.112889 systemd-resolved[1323]: Defaulting to hostname 'linux'.
Mar 25 01:32:13.116031 augenrules[1395]: /sbin/augenrules: No change
Mar 25 01:32:13.122018 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1377)
Mar 25 01:32:13.119106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:32:13.121461 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 25 01:32:13.129257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:32:13.130417 augenrules[1428]: No rules
Mar 25 01:32:13.131424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:32:13.131480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:32:13.134765 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:32:13.135026 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:32:13.153732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 25 01:32:13.155982 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 01:32:13.183659 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 25 01:32:13.184739 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:32:13.188232 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 01:32:13.196254 systemd-networkd[1406]: lo: Link UP Mar 25 01:32:13.196261 systemd-networkd[1406]: lo: Gained carrier Mar 25 01:32:13.197109 systemd-networkd[1406]: Enumeration completed Mar 25 01:32:13.197495 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:32:13.197503 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:32:13.197619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:32:13.197998 systemd-networkd[1406]: eth0: Link UP Mar 25 01:32:13.198001 systemd-networkd[1406]: eth0: Gained carrier Mar 25 01:32:13.198014 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:32:13.198753 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:32:13.200608 systemd[1]: Reached target network.target - Network. Mar 25 01:32:13.208991 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 25 01:32:13.210923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 01:32:13.214124 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 25 01:32:13.214685 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Mar 25 01:32:13.215234 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 25 01:32:13.215278 systemd-timesyncd[1410]: Initial clock synchronization to Tue 2025-03-25 01:32:13.467129 UTC. Mar 25 01:32:13.223319 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 01:32:13.226559 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 01:32:13.237997 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 25 01:32:13.246992 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:32:13.253826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:32:13.283252 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 01:32:13.284327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:32:13.285176 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:32:13.285998 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 01:32:13.286874 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 01:32:13.287893 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 01:32:13.288738 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
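[Editor's note, not part of the log: the DHCPv4 lease recorded above hands out 10.0.0.141/16 with gateway 10.0.0.1, which is only routable because the gateway sits inside the leased /16. A minimal sketch with Python's stdlib `ipaddress` module that checks this from the values in the log:]

```python
import ipaddress

# Address and gateway taken from the systemd-networkd DHCPv4 message above.
iface = ipaddress.ip_interface("10.0.0.141/16")
gateway = ipaddress.ip_address("10.0.0.1")

# The default route is only usable if the gateway is on-link,
# i.e. inside the subnet the lease configured.
assert gateway in iface.network

print(iface.network)                 # 10.0.0.0/16
print(iface.network.num_addresses)   # 65536
```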
Mar 25 01:32:13.289681 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 01:32:13.290691 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 01:32:13.290722 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:32:13.291383 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:32:13.292486 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 01:32:13.294375 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 01:32:13.297160 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 01:32:13.298240 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 01:32:13.299161 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 01:32:13.303586 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 01:32:13.304921 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 25 01:32:13.306708 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 01:32:13.308015 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 01:32:13.308880 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:32:13.309559 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:32:13.310278 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:32:13.310308 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:32:13.311126 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 01:32:13.312695 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 25 01:32:13.312985 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:32:13.315237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 01:32:13.317359 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 01:32:13.319965 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 01:32:13.320886 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 01:32:13.324939 jq[1458]: false Mar 25 01:32:13.323674 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 01:32:13.325558 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 01:32:13.333158 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 01:32:13.337280 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 25 01:32:13.339130 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 01:32:13.339587 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 25 01:32:13.340385 extend-filesystems[1459]: Found loop3 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found loop4 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found loop5 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda1 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda2 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda3 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found usr Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda4 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda6 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda7 Mar 25 01:32:13.341692 extend-filesystems[1459]: Found vda9 Mar 25 01:32:13.341692 extend-filesystems[1459]: Checking size of /dev/vda9 Mar 25 01:32:13.342870 dbus-daemon[1457]: [system] SELinux support is enabled Mar 25 01:32:13.352350 extend-filesystems[1459]: Resized partition /dev/vda9 Mar 25 01:32:13.352431 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 01:32:13.354930 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 01:32:13.360931 extend-filesystems[1480]: resize2fs 1.47.2 (1-Jan-2025) Mar 25 01:32:13.373335 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 25 01:32:13.373359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389) Mar 25 01:32:13.365489 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 01:32:13.368882 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 01:32:13.375394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 01:32:13.375845 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 01:32:13.376149 systemd[1]: motdgen.service: Deactivated successfully. 
Mar 25 01:32:13.376580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 01:32:13.382456 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 25 01:32:13.382932 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 01:32:13.386662 jq[1479]: true Mar 25 01:32:13.395149 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 01:32:13.404925 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 25 01:32:13.423742 tar[1483]: linux-arm64/helm Mar 25 01:32:13.423964 update_engine[1474]: I20250325 01:32:13.415751 1474 main.cc:92] Flatcar Update Engine starting Mar 25 01:32:13.423964 update_engine[1474]: I20250325 01:32:13.422086 1474 update_check_scheduler.cc:74] Next update check in 9m35s Mar 25 01:32:13.424113 jq[1485]: true Mar 25 01:32:13.423843 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button) Mar 25 01:32:13.424129 systemd-logind[1467]: New seat seat0. Mar 25 01:32:13.424417 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 25 01:32:13.424417 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 25 01:32:13.424417 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 25 01:32:13.429476 extend-filesystems[1459]: Resized filesystem in /dev/vda9 Mar 25 01:32:13.424956 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 01:32:13.431431 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 01:32:13.431673 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 01:32:13.436261 systemd[1]: Started update-engine.service - Update Engine. 
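[Editor's note, not part of the log: the EXT4 messages above show the root filesystem growing from 553472 to 1864699 blocks of 4 KiB each. Translating those block counts into bytes, as a quick sketch using only the numbers from the log:]

```python
# Block counts and the 4 KiB block size come from the EXT4 resize
# messages above ("resizing filesystem from 553472 to 1864699 blocks",
# "(4k) blocks long").
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_bytes = old_blocks * BLOCK_SIZE   # 2_267_021_312 bytes
new_bytes = new_blocks * BLOCK_SIZE   # 7_637_807_104 bytes

print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
# 2.11 GiB -> 7.11 GiB
```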
Mar 25 01:32:13.439679 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 01:32:13.439846 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 01:32:13.441141 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 25 01:32:13.441253 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 01:32:13.446162 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 25 01:32:13.496424 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:32:13.497491 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 01:32:13.498459 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:32:13.500740 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 25 01:32:13.627175 containerd[1484]: time="2025-03-25T01:32:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:32:13.628073 containerd[1484]: time="2025-03-25T01:32:13.628039320Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.637926960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.44µs" Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.637961920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.637979600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638119520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638135920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638158760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638203520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638215960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:32:13.638883 
containerd[1484]: time="2025-03-25T01:32:13.638492280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638506560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638517320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:32:13.638883 containerd[1484]: time="2025-03-25T01:32:13.638539640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:32:13.639124 containerd[1484]: time="2025-03-25T01:32:13.638629080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 01:32:13.639124 containerd[1484]: time="2025-03-25T01:32:13.638829280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:32:13.639197 containerd[1484]: time="2025-03-25T01:32:13.639175200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:32:13.639242 containerd[1484]: time="2025-03-25T01:32:13.639229360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:32:13.639318 containerd[1484]: time="2025-03-25T01:32:13.639304360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:32:13.639698 containerd[1484]: 
time="2025-03-25T01:32:13.639678680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:32:13.639823 containerd[1484]: time="2025-03-25T01:32:13.639804320Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:32:13.642901 containerd[1484]: time="2025-03-25T01:32:13.642875240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:32:13.642998 containerd[1484]: time="2025-03-25T01:32:13.642984520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:32:13.643081 containerd[1484]: time="2025-03-25T01:32:13.643066360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:32:13.643135 containerd[1484]: time="2025-03-25T01:32:13.643122520Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:32:13.643186 containerd[1484]: time="2025-03-25T01:32:13.643173360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:32:13.643239 containerd[1484]: time="2025-03-25T01:32:13.643225040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:32:13.643322 containerd[1484]: time="2025-03-25T01:32:13.643305840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:32:13.643375 containerd[1484]: time="2025-03-25T01:32:13.643362400Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:32:13.643426 containerd[1484]: time="2025-03-25T01:32:13.643413920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:32:13.643477 containerd[1484]: time="2025-03-25T01:32:13.643464120Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:32:13.643541 containerd[1484]: time="2025-03-25T01:32:13.643515160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:32:13.643594 containerd[1484]: time="2025-03-25T01:32:13.643581480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:32:13.643761 containerd[1484]: time="2025-03-25T01:32:13.643727240Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:32:13.643845 containerd[1484]: time="2025-03-25T01:32:13.643829520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:32:13.643921 containerd[1484]: time="2025-03-25T01:32:13.643906800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:32:13.643972 containerd[1484]: time="2025-03-25T01:32:13.643958560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:32:13.644024 containerd[1484]: time="2025-03-25T01:32:13.644011360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:32:13.644074 containerd[1484]: time="2025-03-25T01:32:13.644061960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:32:13.644143 containerd[1484]: time="2025-03-25T01:32:13.644128720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:32:13.644204 containerd[1484]: time="2025-03-25T01:32:13.644189760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:32:13.644265 containerd[1484]: time="2025-03-25T01:32:13.644252080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1 Mar 25 01:32:13.644317 containerd[1484]: time="2025-03-25T01:32:13.644303160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:32:13.644376 containerd[1484]: time="2025-03-25T01:32:13.644362160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:32:13.644724 containerd[1484]: time="2025-03-25T01:32:13.644702920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:32:13.644792 containerd[1484]: time="2025-03-25T01:32:13.644779600Z" level=info msg="Start snapshots syncer" Mar 25 01:32:13.644935 containerd[1484]: time="2025-03-25T01:32:13.644917080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:32:13.646763 containerd[1484]: time="2025-03-25T01:32:13.645549800Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 25 01:32:13.646763 containerd[1484]: time="2025-03-25T01:32:13.645624600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:32:13.647088 containerd[1484]: time="2025-03-25T01:32:13.647058720Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:32:13.647292 containerd[1484]: time="2025-03-25T01:32:13.647269760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:32:13.647373 containerd[1484]: time="2025-03-25T01:32:13.647357880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:32:13.647433 containerd[1484]: time="2025-03-25T01:32:13.647418080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:32:13.647490 containerd[1484]: time="2025-03-25T01:32:13.647473040Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:32:13.647558 containerd[1484]: time="2025-03-25T01:32:13.647542160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25
01:32:13.647613 containerd[1484]: time="2025-03-25T01:32:13.647600360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:32:13.647682 containerd[1484]: time="2025-03-25T01:32:13.647667400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:32:13.647764 containerd[1484]: time="2025-03-25T01:32:13.647748320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:32:13.647825 containerd[1484]: time="2025-03-25T01:32:13.647811920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:32:13.647904 containerd[1484]: time="2025-03-25T01:32:13.647889120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:32:13.648479 containerd[1484]: time="2025-03-25T01:32:13.648437720Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:32:13.649070 containerd[1484]: time="2025-03-25T01:32:13.649044320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:32:13.649144 containerd[1484]: time="2025-03-25T01:32:13.649129440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:32:13.649215 containerd[1484]: time="2025-03-25T01:32:13.649199680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:32:13.649263 containerd[1484]: time="2025-03-25T01:32:13.649251000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:32:13.649319 containerd[1484]: time="2025-03-25T01:32:13.649306040Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:32:13.649372 containerd[1484]: time="2025-03-25T01:32:13.649359520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:32:13.649491 containerd[1484]: time="2025-03-25T01:32:13.649478480Z" level=info msg="runtime interface created" Mar 25 01:32:13.649550 containerd[1484]: time="2025-03-25T01:32:13.649523120Z" level=info msg="created NRI interface" Mar 25 01:32:13.649613 containerd[1484]: time="2025-03-25T01:32:13.649598640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:32:13.649672 containerd[1484]: time="2025-03-25T01:32:13.649660040Z" level=info msg="Connect containerd service" Mar 25 01:32:13.649747 containerd[1484]: time="2025-03-25T01:32:13.649733280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:32:13.650575 containerd[1484]: time="2025-03-25T01:32:13.650542680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:32:13.734665 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:32:13.755344 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:32:13.758137 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763352840Z" level=info msg="Start subscribing containerd event" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763422680Z" level=info msg="Start recovering state" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763594840Z" level=info msg="Start event monitor" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763611600Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763629480Z" level=info msg="Start streaming server" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763641320Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763649680Z" level=info msg="runtime interface starting up..." Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763655320Z" level=info msg="starting plugins..." Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.763669560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.764062520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.764111680Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:32:13.764873 containerd[1484]: time="2025-03-25T01:32:13.764168120Z" level=info msg="containerd successfully booted in 0.137517s" Mar 25 01:32:13.764222 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:32:13.775335 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:32:13.775575 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:32:13.778110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 25 01:32:13.783314 tar[1483]: linux-arm64/LICENSE Mar 25 01:32:13.783377 tar[1483]: linux-arm64/README.md Mar 25 01:32:13.800832 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:32:13.803934 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:32:13.806504 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:32:13.808565 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 25 01:32:13.809901 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:32:14.526796 systemd-networkd[1406]: eth0: Gained IPv6LL Mar 25 01:32:14.529207 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:32:14.531118 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:32:14.535300 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 25 01:32:14.537700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:32:14.548753 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:32:14.564499 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 25 01:32:14.564742 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 25 01:32:14.566199 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:32:14.571297 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:32:15.038826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:32:15.040425 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:32:15.043039 systemd[1]: Startup finished in 518ms (kernel) + 8.870s (initrd) + 3.513s (userspace) = 12.901s. 
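[Editor's note, not part of the log: systemd's "Startup finished" line above reports per-stage timings that should sum to the stated total (518ms + 8.870s + 3.513s = 12.901s). A small sketch that parses such a line and checks the arithmetic:]

```python
import re

# The "Startup finished" record from the journal above.
line = ("Startup finished in 518ms (kernel) + 8.870s (initrd) "
        "+ 3.513s (userspace) = 12.901s.")

def to_seconds(token: str) -> float:
    # journald prints durations as e.g. "518ms" or "8.870s"
    if token.endswith("ms"):
        return float(token[:-2]) / 1000
    return float(token[:-1])

# Pull out all duration tokens; the last one is the stated total.
tokens = re.findall(r"[\d.]+m?s", line)
*stages, total = [to_seconds(t) for t in tokens]

# The per-stage times really do add up to the reported 12.901s.
assert round(sum(stages), 3) == total
```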
Mar 25 01:32:15.043566 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:32:15.457937 kubelet[1585]: E0325 01:32:15.457808 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:32:15.460281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:32:15.460437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:32:15.460724 systemd[1]: kubelet.service: Consumed 757ms CPU time, 233.5M memory peak. Mar 25 01:32:15.694721 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:32:15.695843 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:43736.service - OpenSSH per-connection server daemon (10.0.0.1:43736). Mar 25 01:32:15.752848 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 43736 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:15.754525 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:15.760052 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:32:15.760922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:32:15.766028 systemd-logind[1467]: New session 1 of user core. Mar 25 01:32:15.778415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:32:15.782693 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 25 01:32:15.800461 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:32:15.802371 systemd-logind[1467]: New session c1 of user core. Mar 25 01:32:15.922928 systemd[1603]: Queued start job for default target default.target. Mar 25 01:32:15.931727 systemd[1603]: Created slice app.slice - User Application Slice. Mar 25 01:32:15.931757 systemd[1603]: Reached target paths.target - Paths. Mar 25 01:32:15.931791 systemd[1603]: Reached target timers.target - Timers. Mar 25 01:32:15.932926 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:32:15.941274 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:32:15.941332 systemd[1603]: Reached target sockets.target - Sockets. Mar 25 01:32:15.941368 systemd[1603]: Reached target basic.target - Basic System. Mar 25 01:32:15.941395 systemd[1603]: Reached target default.target - Main User Target. Mar 25 01:32:15.941418 systemd[1603]: Startup finished in 134ms. Mar 25 01:32:15.941575 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:32:15.942858 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:32:16.005980 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:43748.service - OpenSSH per-connection server daemon (10.0.0.1:43748). Mar 25 01:32:16.056286 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 43748 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:16.057414 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:16.062077 systemd-logind[1467]: New session 2 of user core. Mar 25 01:32:16.072012 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 25 01:32:16.122032 sshd[1616]: Connection closed by 10.0.0.1 port 43748 Mar 25 01:32:16.122393 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Mar 25 01:32:16.133768 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:43748.service: Deactivated successfully. Mar 25 01:32:16.135851 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:32:16.137966 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Mar 25 01:32:16.139114 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:43760.service - OpenSSH per-connection server daemon (10.0.0.1:43760). Mar 25 01:32:16.139963 systemd-logind[1467]: Removed session 2. Mar 25 01:32:16.189520 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 43760 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:16.190554 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:16.193949 systemd-logind[1467]: New session 3 of user core. Mar 25 01:32:16.204023 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:32:16.251797 sshd[1624]: Connection closed by 10.0.0.1 port 43760 Mar 25 01:32:16.251699 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Mar 25 01:32:16.262023 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:43760.service: Deactivated successfully. Mar 25 01:32:16.263393 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:32:16.265378 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Mar 25 01:32:16.267057 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:43762.service - OpenSSH per-connection server daemon (10.0.0.1:43762). Mar 25 01:32:16.267966 systemd-logind[1467]: Removed session 3. 
Mar 25 01:32:16.316140 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 43762 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:16.317168 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:16.320896 systemd-logind[1467]: New session 4 of user core. Mar 25 01:32:16.331083 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:32:16.382614 sshd[1632]: Connection closed by 10.0.0.1 port 43762 Mar 25 01:32:16.382909 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Mar 25 01:32:16.391740 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:43762.service: Deactivated successfully. Mar 25 01:32:16.393007 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:32:16.395967 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:32:16.396339 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:43766.service - OpenSSH per-connection server daemon (10.0.0.1:43766). Mar 25 01:32:16.397477 systemd-logind[1467]: Removed session 4. Mar 25 01:32:16.442674 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 43766 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:16.443669 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:16.447424 systemd-logind[1467]: New session 5 of user core. Mar 25 01:32:16.455018 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 25 01:32:16.518102 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:32:16.518359 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:32:16.541684 sudo[1641]: pam_unix(sudo:session): session closed for user root Mar 25 01:32:16.544822 sshd[1640]: Connection closed by 10.0.0.1 port 43766 Mar 25 01:32:16.545293 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Mar 25 01:32:16.556936 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:43766.service: Deactivated successfully. Mar 25 01:32:16.560074 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:32:16.560684 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:32:16.563168 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:43780.service - OpenSSH per-connection server daemon (10.0.0.1:43780). Mar 25 01:32:16.565237 systemd-logind[1467]: Removed session 5. Mar 25 01:32:16.609799 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 43780 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:16.610798 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:16.614219 systemd-logind[1467]: New session 6 of user core. Mar 25 01:32:16.623094 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 25 01:32:16.675009 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:32:16.675292 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:32:16.678512 sudo[1651]: pam_unix(sudo:session): session closed for user root Mar 25 01:32:16.682967 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:32:16.683218 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:32:16.691506 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:32:16.727752 augenrules[1673]: No rules Mar 25 01:32:16.728950 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:32:16.729177 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:32:16.730349 sudo[1650]: pam_unix(sudo:session): session closed for user root Mar 25 01:32:16.731524 sshd[1649]: Connection closed by 10.0.0.1 port 43780 Mar 25 01:32:16.731931 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Mar 25 01:32:16.745090 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:43780.service: Deactivated successfully. Mar 25 01:32:16.746569 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:32:16.747828 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:32:16.749240 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:43794.service - OpenSSH per-connection server daemon (10.0.0.1:43794). Mar 25 01:32:16.749925 systemd-logind[1467]: Removed session 6. Mar 25 01:32:16.794376 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 43794 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:32:16.795578 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:32:16.799617 systemd-logind[1467]: New session 7 of user core. 
Mar 25 01:32:16.810082 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:32:16.861000 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:32:16.861572 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:32:17.206966 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:32:17.221175 (dockerd)[1705]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:32:17.471269 dockerd[1705]: time="2025-03-25T01:32:17.471145145Z" level=info msg="Starting up" Mar 25 01:32:17.472701 dockerd[1705]: time="2025-03-25T01:32:17.472662375Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:32:17.667833 dockerd[1705]: time="2025-03-25T01:32:17.667783424Z" level=info msg="Loading containers: start." Mar 25 01:32:17.796919 kernel: Initializing XFRM netlink socket Mar 25 01:32:17.852305 systemd-networkd[1406]: docker0: Link UP Mar 25 01:32:17.929198 dockerd[1705]: time="2025-03-25T01:32:17.929146383Z" level=info msg="Loading containers: done." 
Mar 25 01:32:17.948618 dockerd[1705]: time="2025-03-25T01:32:17.948211197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:32:17.948618 dockerd[1705]: time="2025-03-25T01:32:17.948297419Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:32:17.948618 dockerd[1705]: time="2025-03-25T01:32:17.948457609Z" level=info msg="Daemon has completed initialization" Mar 25 01:32:17.975889 dockerd[1705]: time="2025-03-25T01:32:17.975818035Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:32:17.975986 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:32:18.833603 containerd[1484]: time="2025-03-25T01:32:18.833518788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 25 01:32:19.471029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365032958.mount: Deactivated successfully. 
Mar 25 01:32:21.236666 containerd[1484]: time="2025-03-25T01:32:21.236491581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:21.237488 containerd[1484]: time="2025-03-25T01:32:21.237242434Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768"
Mar 25 01:32:21.238184 containerd[1484]: time="2025-03-25T01:32:21.238108858Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:21.240587 containerd[1484]: time="2025-03-25T01:32:21.240552471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:21.242409 containerd[1484]: time="2025-03-25T01:32:21.242369590Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.408808446s"
Mar 25 01:32:21.242465 containerd[1484]: time="2025-03-25T01:32:21.242409437Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 25 01:32:21.243072 containerd[1484]: time="2025-03-25T01:32:21.243032165Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 25 01:32:22.877281 containerd[1484]: time="2025-03-25T01:32:22.877052374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:22.878116 containerd[1484]: time="2025-03-25T01:32:22.877845586Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980"
Mar 25 01:32:22.878828 containerd[1484]: time="2025-03-25T01:32:22.878796851Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:22.882772 containerd[1484]: time="2025-03-25T01:32:22.882702100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:22.883741 containerd[1484]: time="2025-03-25T01:32:22.883707504Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.640631294s"
Mar 25 01:32:22.883741 containerd[1484]: time="2025-03-25T01:32:22.883741629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 25 01:32:22.884378 containerd[1484]: time="2025-03-25T01:32:22.884168479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 25 01:32:24.176979 containerd[1484]: time="2025-03-25T01:32:24.176801845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:24.177786 containerd[1484]: time="2025-03-25T01:32:24.177705269Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831"
Mar 25 01:32:24.178493 containerd[1484]: time="2025-03-25T01:32:24.178422764Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:24.181435 containerd[1484]: time="2025-03-25T01:32:24.181388898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:24.183085 containerd[1484]: time="2025-03-25T01:32:24.183058014Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.298858866s"
Mar 25 01:32:24.183176 containerd[1484]: time="2025-03-25T01:32:24.183090118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 25 01:32:24.183544 containerd[1484]: time="2025-03-25T01:32:24.183523237Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 25 01:32:25.276129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534857945.mount: Deactivated successfully.
Mar 25 01:32:25.638347 containerd[1484]: time="2025-03-25T01:32:25.638215171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:32:25.638939 containerd[1484]: time="2025-03-25T01:32:25.638894783Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 25 01:32:25.639737 containerd[1484]: time="2025-03-25T01:32:25.639699293Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:32:25.641943 containerd[1484]: time="2025-03-25T01:32:25.641900497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:32:25.642449 containerd[1484]: time="2025-03-25T01:32:25.642418465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.458866157s" Mar 25 01:32:25.642479 containerd[1484]: time="2025-03-25T01:32:25.642447393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 25 01:32:25.642965 containerd[1484]: time="2025-03-25T01:32:25.642928456Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 01:32:25.710905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:32:25.712375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 25 01:32:25.834894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:32:25.838628 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:32:25.873936 kubelet[1991]: E0325 01:32:25.873885 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:32:25.877075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:32:25.877224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:32:25.877643 systemd[1]: kubelet.service: Consumed 130ms CPU time, 97M memory peak. Mar 25 01:32:26.278263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583320565.mount: Deactivated successfully. 
Mar 25 01:32:27.307675 containerd[1484]: time="2025-03-25T01:32:27.307626797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:32:27.308644 containerd[1484]: time="2025-03-25T01:32:27.308406381Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 25 01:32:27.309228 containerd[1484]: time="2025-03-25T01:32:27.309193968Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:32:27.313739 containerd[1484]: time="2025-03-25T01:32:27.313704949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:32:27.314881 containerd[1484]: time="2025-03-25T01:32:27.314817812Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.671857863s" Mar 25 01:32:27.314881 containerd[1484]: time="2025-03-25T01:32:27.314850150Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 25 01:32:27.315249 containerd[1484]: time="2025-03-25T01:32:27.315229967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 01:32:27.738433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952984448.mount: Deactivated successfully. 
Mar 25 01:32:27.742494 containerd[1484]: time="2025-03-25T01:32:27.742455756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:32:27.743148 containerd[1484]: time="2025-03-25T01:32:27.743095487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Mar 25 01:32:27.743791 containerd[1484]: time="2025-03-25T01:32:27.743750985Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:32:27.745772 containerd[1484]: time="2025-03-25T01:32:27.745736545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:32:27.746469 containerd[1484]: time="2025-03-25T01:32:27.746439184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 431.181102ms"
Mar 25 01:32:27.746502 containerd[1484]: time="2025-03-25T01:32:27.746467822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 25 01:32:27.746975 containerd[1484]: time="2025-03-25T01:32:27.746943648Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 25 01:32:28.219918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470307015.mount: Deactivated successfully.
Mar 25 01:32:30.930640 containerd[1484]: time="2025-03-25T01:32:30.930590534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:30.931646 containerd[1484]: time="2025-03-25T01:32:30.931374599Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Mar 25 01:32:30.932474 containerd[1484]: time="2025-03-25T01:32:30.932431555Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:30.935894 containerd[1484]: time="2025-03-25T01:32:30.935795698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:30.936464 containerd[1484]: time="2025-03-25T01:32:30.936434544Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.189446143s"
Mar 25 01:32:30.936519 containerd[1484]: time="2025-03-25T01:32:30.936465579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 25 01:32:36.098672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 25 01:32:36.100062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:32:36.118294 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 25 01:32:36.118372 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 25 01:32:36.118566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:32:36.121217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:32:36.141998 systemd[1]: Reload requested from client PID 2131 ('systemctl') (unit session-7.scope)... Mar 25 01:32:36.142015 systemd[1]: Reloading... Mar 25 01:32:36.208879 zram_generator::config[2178]: No configuration found. Mar 25 01:32:36.342848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:32:36.413329 systemd[1]: Reloading finished in 271 ms. Mar 25 01:32:36.457046 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 25 01:32:36.457118 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 25 01:32:36.457359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:32:36.457397 systemd[1]: kubelet.service: Consumed 77ms CPU time, 82.4M memory peak. Mar 25 01:32:36.459275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:32:36.554458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:32:36.558159 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:32:36.593230 kubelet[2221]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:32:36.593230 kubelet[2221]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 25 01:32:36.593230 kubelet[2221]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:32:36.593570 kubelet[2221]: I0325 01:32:36.593364 2221 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:32:37.271249 kubelet[2221]: I0325 01:32:37.271199 2221 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 25 01:32:37.271249 kubelet[2221]: I0325 01:32:37.271232 2221 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:32:37.271502 kubelet[2221]: I0325 01:32:37.271470 2221 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 25 01:32:37.346203 kubelet[2221]: E0325 01:32:37.346164 2221 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:32:37.351023 kubelet[2221]: I0325 01:32:37.350996 2221 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:32:37.362847 kubelet[2221]: I0325 01:32:37.362827 2221 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 01:32:37.369636 kubelet[2221]: I0325 01:32:37.369610 2221 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:32:37.369861 kubelet[2221]: I0325 01:32:37.369845 2221 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 25 01:32:37.370016 kubelet[2221]: I0325 01:32:37.369993 2221 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:32:37.370173 kubelet[2221]: I0325 01:32:37.370017 2221 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 01:32:37.370321 kubelet[2221]: I0325 01:32:37.370310 2221 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:32:37.370321 kubelet[2221]: I0325 01:32:37.370321 2221 container_manager_linux.go:300] "Creating device plugin manager"
Mar 25 01:32:37.370510 kubelet[2221]: I0325 01:32:37.370499 2221 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:32:37.372204 kubelet[2221]: I0325 01:32:37.372182 2221 kubelet.go:408] "Attempting to sync node with API server"
Mar 25 01:32:37.372248 kubelet[2221]: I0325 01:32:37.372214 2221 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:32:37.372921 kubelet[2221]: I0325 01:32:37.372300 2221 kubelet.go:314] "Adding apiserver pod source"
Mar 25 01:32:37.372921 kubelet[2221]: I0325 01:32:37.372314 2221 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:32:37.374021 kubelet[2221]: I0325 01:32:37.373995 2221 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:32:37.374140 kubelet[2221]: W0325 01:32:37.374090 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Mar 25 01:32:37.374173 kubelet[2221]: E0325 01:32:37.374149 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:32:37.374349 kubelet[2221]: W0325 01:32:37.374311 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Mar 25 01:32:37.374446 kubelet[2221]: E0325 01:32:37.374426 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:32:37.375915 kubelet[2221]: I0325 01:32:37.375899 2221 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:32:37.379782 kubelet[2221]: W0325 01:32:37.379757 2221 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 25 01:32:37.380443 kubelet[2221]: I0325 01:32:37.380425 2221 server.go:1269] "Started kubelet"
Mar 25 01:32:37.381309 kubelet[2221]: I0325 01:32:37.380911 2221 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:32:37.383056 kubelet[2221]: I0325 01:32:37.382438 2221 server.go:460] "Adding debug handlers to kubelet server"
Mar 25 01:32:37.384176 kubelet[2221]: I0325 01:32:37.384130 2221 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:32:37.384388 kubelet[2221]: I0325 01:32:37.384364 2221 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:32:37.384890 kubelet[2221]: I0325 01:32:37.384812 2221 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:32:37.385083 kubelet[2221]: I0325 01:32:37.385054 2221 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 01:32:37.386928 kubelet[2221]: I0325 01:32:37.386688 2221 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 25 01:32:37.386928 kubelet[2221]: I0325 01:32:37.386807 2221 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 25 01:32:37.386928 kubelet[2221]: I0325 01:32:37.386873 2221 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:32:37.387187 kubelet[2221]: W0325 01:32:37.387141 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Mar 25 01:32:37.387230 kubelet[2221]: E0325 01:32:37.387190 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:32:37.387646 kubelet[2221]: E0325 01:32:37.385606 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182fe7b7caa33a64 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-25 01:32:37.380397668 +0000 UTC m=+0.819106114,LastTimestamp:2025-03-25 01:32:37.380397668 +0000 UTC m=+0.819106114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 25 01:32:37.387758 kubelet[2221]: E0325 01:32:37.387697 2221
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:32:37.387801 kubelet[2221]: E0325 01:32:37.387757 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms" Mar 25 01:32:37.388201 kubelet[2221]: I0325 01:32:37.388182 2221 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:32:37.388391 kubelet[2221]: E0325 01:32:37.388343 2221 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:32:37.389716 kubelet[2221]: I0325 01:32:37.389680 2221 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:32:37.389716 kubelet[2221]: I0325 01:32:37.389704 2221 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:32:37.401247 kubelet[2221]: I0325 01:32:37.401218 2221 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:32:37.401247 kubelet[2221]: I0325 01:32:37.401239 2221 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:32:37.401356 kubelet[2221]: I0325 01:32:37.401256 2221 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:32:37.403047 kubelet[2221]: I0325 01:32:37.403010 2221 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:32:37.404173 kubelet[2221]: I0325 01:32:37.404150 2221 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:32:37.404267 kubelet[2221]: I0325 01:32:37.404258 2221 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:32:37.404337 kubelet[2221]: I0325 01:32:37.404327 2221 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 01:32:37.406281 kubelet[2221]: E0325 01:32:37.406258 2221 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:32:37.406920 kubelet[2221]: W0325 01:32:37.406843 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Mar 25 01:32:37.406999 kubelet[2221]: E0325 01:32:37.406919 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:32:37.463586 kubelet[2221]: I0325 01:32:37.463540 2221 policy_none.go:49] "None policy: Start" Mar 25 01:32:37.464355 kubelet[2221]: I0325 01:32:37.464321 2221 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:32:37.464417 kubelet[2221]: I0325 01:32:37.464364 2221 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:32:37.470249 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:32:37.484401 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:32:37.487877 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 25 01:32:37.488058 kubelet[2221]: E0325 01:32:37.487864 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:32:37.498601 kubelet[2221]: I0325 01:32:37.498552 2221 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:32:37.498777 kubelet[2221]: I0325 01:32:37.498756 2221 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:32:37.498829 kubelet[2221]: I0325 01:32:37.498772 2221 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:32:37.499003 kubelet[2221]: I0325 01:32:37.498983 2221 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:32:37.499996 kubelet[2221]: E0325 01:32:37.499958 2221 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 25 01:32:37.514165 systemd[1]: Created slice kubepods-burstable-poddcac1880986b7da700b3ab24032bdecb.slice - libcontainer container kubepods-burstable-poddcac1880986b7da700b3ab24032bdecb.slice. Mar 25 01:32:37.525474 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 25 01:32:37.539822 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
Mar 25 01:32:37.588824 kubelet[2221]: E0325 01:32:37.588785 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" Mar 25 01:32:37.600129 kubelet[2221]: I0325 01:32:37.600094 2221 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 25 01:32:37.600473 kubelet[2221]: E0325 01:32:37.600431 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Mar 25 01:32:37.687890 kubelet[2221]: I0325 01:32:37.687793 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 25 01:32:37.687890 kubelet[2221]: I0325 01:32:37.687830 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcac1880986b7da700b3ab24032bdecb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dcac1880986b7da700b3ab24032bdecb\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:37.687890 kubelet[2221]: I0325 01:32:37.687850 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:37.687890 kubelet[2221]: I0325 01:32:37.687884 2221 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:37.688065 kubelet[2221]: I0325 01:32:37.687918 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:37.688065 kubelet[2221]: I0325 01:32:37.687946 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:37.688065 kubelet[2221]: I0325 01:32:37.687966 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcac1880986b7da700b3ab24032bdecb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcac1880986b7da700b3ab24032bdecb\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:37.688065 kubelet[2221]: I0325 01:32:37.687991 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcac1880986b7da700b3ab24032bdecb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcac1880986b7da700b3ab24032bdecb\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:37.688065 kubelet[2221]: I0325 01:32:37.688021 2221 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:37.802348 kubelet[2221]: I0325 01:32:37.802244 2221 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 25 01:32:37.802638 kubelet[2221]: E0325 01:32:37.802582 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Mar 25 01:32:37.823008 kubelet[2221]: E0325 01:32:37.822987 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:37.823655 containerd[1484]: time="2025-03-25T01:32:37.823585919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dcac1880986b7da700b3ab24032bdecb,Namespace:kube-system,Attempt:0,}" Mar 25 01:32:37.838035 kubelet[2221]: E0325 01:32:37.838008 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:37.838435 containerd[1484]: time="2025-03-25T01:32:37.838404687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 25 01:32:37.840436 containerd[1484]: time="2025-03-25T01:32:37.840381169Z" level=info msg="connecting to shim d8593ffdda406eb5ef8169333f31fba40278eba68b8f796ac4e4feb4aff18202" address="unix:///run/containerd/s/794bfbcdc592a1863b1e5f165e778ed9eed2161df11368e9812059e62c30d71b" namespace=k8s.io 
protocol=ttrpc version=3 Mar 25 01:32:37.841712 kubelet[2221]: E0325 01:32:37.841689 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:37.842571 containerd[1484]: time="2025-03-25T01:32:37.842527899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 25 01:32:37.863852 containerd[1484]: time="2025-03-25T01:32:37.863808047Z" level=info msg="connecting to shim 96d95aeb9ef5e046b942c1ab53c8035714f33a2c468724b51aca3b2ab3d2b0bb" address="unix:///run/containerd/s/3b7260a4e6089d6a176784347dbbf952316af03a47bb51c7fb7324e2fd30cd57" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:32:37.865623 containerd[1484]: time="2025-03-25T01:32:37.865465384Z" level=info msg="connecting to shim a31b8313dc417c8e0d47c83ebf811a772655279b1b4a78fac76390066b59014b" address="unix:///run/containerd/s/cff83afb288ce0eb81ed3e7c9ddf135360a7ae819255e8ed7e7b11fb3e6e9b68" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:32:37.868121 systemd[1]: Started cri-containerd-d8593ffdda406eb5ef8169333f31fba40278eba68b8f796ac4e4feb4aff18202.scope - libcontainer container d8593ffdda406eb5ef8169333f31fba40278eba68b8f796ac4e4feb4aff18202. Mar 25 01:32:37.889037 systemd[1]: Started cri-containerd-96d95aeb9ef5e046b942c1ab53c8035714f33a2c468724b51aca3b2ab3d2b0bb.scope - libcontainer container 96d95aeb9ef5e046b942c1ab53c8035714f33a2c468724b51aca3b2ab3d2b0bb. Mar 25 01:32:37.892335 systemd[1]: Started cri-containerd-a31b8313dc417c8e0d47c83ebf811a772655279b1b4a78fac76390066b59014b.scope - libcontainer container a31b8313dc417c8e0d47c83ebf811a772655279b1b4a78fac76390066b59014b. 
Mar 25 01:32:37.910266 containerd[1484]: time="2025-03-25T01:32:37.909991267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dcac1880986b7da700b3ab24032bdecb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8593ffdda406eb5ef8169333f31fba40278eba68b8f796ac4e4feb4aff18202\"" Mar 25 01:32:37.911532 kubelet[2221]: E0325 01:32:37.911410 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:37.914444 containerd[1484]: time="2025-03-25T01:32:37.914274713Z" level=info msg="CreateContainer within sandbox \"d8593ffdda406eb5ef8169333f31fba40278eba68b8f796ac4e4feb4aff18202\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:32:37.926335 containerd[1484]: time="2025-03-25T01:32:37.926236394Z" level=info msg="Container 45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:32:37.926866 containerd[1484]: time="2025-03-25T01:32:37.926815839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d95aeb9ef5e046b942c1ab53c8035714f33a2c468724b51aca3b2ab3d2b0bb\"" Mar 25 01:32:37.927584 kubelet[2221]: E0325 01:32:37.927539 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:37.929987 containerd[1484]: time="2025-03-25T01:32:37.929953854Z" level=info msg="CreateContainer within sandbox \"96d95aeb9ef5e046b942c1ab53c8035714f33a2c468724b51aca3b2ab3d2b0bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:32:37.932537 containerd[1484]: time="2025-03-25T01:32:37.932504213Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a31b8313dc417c8e0d47c83ebf811a772655279b1b4a78fac76390066b59014b\"" Mar 25 01:32:37.933560 containerd[1484]: time="2025-03-25T01:32:37.933526824Z" level=info msg="CreateContainer within sandbox \"d8593ffdda406eb5ef8169333f31fba40278eba68b8f796ac4e4feb4aff18202\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95\"" Mar 25 01:32:37.934132 kubelet[2221]: E0325 01:32:37.934099 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:37.934896 containerd[1484]: time="2025-03-25T01:32:37.934358837Z" level=info msg="StartContainer for \"45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95\"" Mar 25 01:32:37.935610 containerd[1484]: time="2025-03-25T01:32:37.935572487Z" level=info msg="connecting to shim 45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95" address="unix:///run/containerd/s/794bfbcdc592a1863b1e5f165e778ed9eed2161df11368e9812059e62c30d71b" protocol=ttrpc version=3 Mar 25 01:32:37.938118 containerd[1484]: time="2025-03-25T01:32:37.938078301Z" level=info msg="Container a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:32:37.939233 containerd[1484]: time="2025-03-25T01:32:37.938900059Z" level=info msg="CreateContainer within sandbox \"a31b8313dc417c8e0d47c83ebf811a772655279b1b4a78fac76390066b59014b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:32:37.950260 containerd[1484]: time="2025-03-25T01:32:37.950215358Z" level=info msg="Container c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:32:37.950493 
containerd[1484]: time="2025-03-25T01:32:37.950456509Z" level=info msg="CreateContainer within sandbox \"96d95aeb9ef5e046b942c1ab53c8035714f33a2c468724b51aca3b2ab3d2b0bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d\"" Mar 25 01:32:37.951150 containerd[1484]: time="2025-03-25T01:32:37.951125204Z" level=info msg="StartContainer for \"a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d\"" Mar 25 01:32:37.952239 containerd[1484]: time="2025-03-25T01:32:37.952205980Z" level=info msg="connecting to shim a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d" address="unix:///run/containerd/s/3b7260a4e6089d6a176784347dbbf952316af03a47bb51c7fb7324e2fd30cd57" protocol=ttrpc version=3 Mar 25 01:32:37.954050 systemd[1]: Started cri-containerd-45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95.scope - libcontainer container 45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95. 
Mar 25 01:32:37.959087 containerd[1484]: time="2025-03-25T01:32:37.959039144Z" level=info msg="CreateContainer within sandbox \"a31b8313dc417c8e0d47c83ebf811a772655279b1b4a78fac76390066b59014b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1\"" Mar 25 01:32:37.959488 containerd[1484]: time="2025-03-25T01:32:37.959464564Z" level=info msg="StartContainer for \"c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1\"" Mar 25 01:32:37.960437 containerd[1484]: time="2025-03-25T01:32:37.960398646Z" level=info msg="connecting to shim c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1" address="unix:///run/containerd/s/cff83afb288ce0eb81ed3e7c9ddf135360a7ae819255e8ed7e7b11fb3e6e9b68" protocol=ttrpc version=3 Mar 25 01:32:37.972015 systemd[1]: Started cri-containerd-a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d.scope - libcontainer container a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d. Mar 25 01:32:37.975737 systemd[1]: Started cri-containerd-c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1.scope - libcontainer container c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1. 
Mar 25 01:32:37.989926 kubelet[2221]: E0325 01:32:37.989879 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" Mar 25 01:32:37.998765 containerd[1484]: time="2025-03-25T01:32:37.998681746Z" level=info msg="StartContainer for \"45444c4409bdbbdef17b92c3763514f245c60ceabce030e0d4b9be7a80efda95\" returns successfully" Mar 25 01:32:38.031468 containerd[1484]: time="2025-03-25T01:32:38.028061796Z" level=info msg="StartContainer for \"c1c457fe94652d1bd4b0fa6e2a3e1c0a4c2d75730c4965144fd80ddd597db4c1\" returns successfully" Mar 25 01:32:38.031468 containerd[1484]: time="2025-03-25T01:32:38.028246511Z" level=info msg="StartContainer for \"a7c2c7a9278ec902980d1e45a79a4161445b12687edccaa15eaf09a4c28d6c5d\" returns successfully" Mar 25 01:32:38.205508 kubelet[2221]: I0325 01:32:38.205313 2221 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 25 01:32:38.412074 kubelet[2221]: E0325 01:32:38.411915 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:38.414379 kubelet[2221]: E0325 01:32:38.414131 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:38.416756 kubelet[2221]: E0325 01:32:38.416737 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:39.423627 kubelet[2221]: E0325 01:32:39.423553 2221 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:39.773823 kubelet[2221]: E0325 01:32:39.773790 2221 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 25 01:32:39.867973 kubelet[2221]: I0325 01:32:39.867934 2221 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 25 01:32:40.375543 kubelet[2221]: I0325 01:32:40.375502 2221 apiserver.go:52] "Watching apiserver" Mar 25 01:32:40.387257 kubelet[2221]: I0325 01:32:40.387202 2221 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 25 01:32:42.011234 systemd[1]: Reload requested from client PID 2493 ('systemctl') (unit session-7.scope)... Mar 25 01:32:42.011248 systemd[1]: Reloading... Mar 25 01:32:42.079017 zram_generator::config[2541]: No configuration found. Mar 25 01:32:42.175009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:32:42.256895 systemd[1]: Reloading finished in 245 ms. Mar 25 01:32:42.276462 kubelet[2221]: I0325 01:32:42.276383 2221 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:32:42.276845 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:32:42.291301 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:32:42.291498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:32:42.291536 systemd[1]: kubelet.service: Consumed 1.228s CPU time, 118.8M memory peak. Mar 25 01:32:42.293537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:32:42.404602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:32:42.407982 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:32:42.440439 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:32:42.440439 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:32:42.440439 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:32:42.440439 kubelet[2579]: I0325 01:32:42.439920 2579 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:32:42.446145 kubelet[2579]: I0325 01:32:42.446111 2579 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 01:32:42.446145 kubelet[2579]: I0325 01:32:42.446138 2579 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:32:42.446340 kubelet[2579]: I0325 01:32:42.446324 2579 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 01:32:42.447549 kubelet[2579]: I0325 01:32:42.447528 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 25 01:32:42.449442 kubelet[2579]: I0325 01:32:42.449348 2579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:32:42.453912 kubelet[2579]: I0325 01:32:42.453087 2579 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:32:42.455444 kubelet[2579]: I0325 01:32:42.455427 2579 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 25 01:32:42.455543 kubelet[2579]: I0325 01:32:42.455522 2579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 25 01:32:42.455647 kubelet[2579]: I0325 01:32:42.455616 2579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:32:42.455789 kubelet[2579]: I0325 01:32:42.455641 2579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nod
efs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:32:42.455866 kubelet[2579]: I0325 01:32:42.455794 2579 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:32:42.455866 kubelet[2579]: I0325 01:32:42.455804 2579 container_manager_linux.go:300] "Creating device plugin manager" Mar 25 01:32:42.455866 kubelet[2579]: I0325 01:32:42.455832 2579 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:32:42.455959 kubelet[2579]: I0325 01:32:42.455953 2579 kubelet.go:408] "Attempting to sync node with API server" Mar 25 01:32:42.455981 kubelet[2579]: I0325 01:32:42.455967 2579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:32:42.456005 kubelet[2579]: I0325 01:32:42.455987 2579 kubelet.go:314] "Adding apiserver pod source" Mar 25 01:32:42.456005 kubelet[2579]: I0325 01:32:42.456000 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:32:42.457126 kubelet[2579]: I0325 01:32:42.457058 2579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:32:42.457569 kubelet[2579]: I0325 01:32:42.457542 2579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:32:42.457991 kubelet[2579]: I0325 01:32:42.457968 2579 server.go:1269] "Started kubelet" Mar 25 01:32:42.460180 kubelet[2579]: I0325 01:32:42.460163 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 
01:32:42.460634 kubelet[2579]: I0325 01:32:42.460548 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:32:42.460705 kubelet[2579]: I0325 01:32:42.460640 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:32:42.461178 kubelet[2579]: I0325 01:32:42.461160 2579 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 25 01:32:42.461237 kubelet[2579]: I0325 01:32:42.461158 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:32:42.461279 kubelet[2579]: E0325 01:32:42.461256 2579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:32:42.461373 kubelet[2579]: I0325 01:32:42.461360 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 25 01:32:42.461503 kubelet[2579]: I0325 01:32:42.461492 2579 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:32:42.462047 kubelet[2579]: I0325 01:32:42.462015 2579 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:32:42.466891 kubelet[2579]: I0325 01:32:42.465344 2579 server.go:460] "Adding debug handlers to kubelet server" Mar 25 01:32:42.469313 kubelet[2579]: I0325 01:32:42.469285 2579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:32:42.470242 kubelet[2579]: I0325 01:32:42.470210 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:32:42.477526 kubelet[2579]: I0325 01:32:42.477442 2579 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:32:42.482121 kubelet[2579]: I0325 01:32:42.482085 2579 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Mar 25 01:32:42.483133 kubelet[2579]: I0325 01:32:42.483105 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:32:42.483133 kubelet[2579]: I0325 01:32:42.483129 2579 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:32:42.483226 kubelet[2579]: I0325 01:32:42.483144 2579 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 01:32:42.483226 kubelet[2579]: E0325 01:32:42.483197 2579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:32:42.515947 kubelet[2579]: I0325 01:32:42.515921 2579 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:32:42.515947 kubelet[2579]: I0325 01:32:42.515941 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:32:42.515947 kubelet[2579]: I0325 01:32:42.515968 2579 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:32:42.516105 kubelet[2579]: I0325 01:32:42.516094 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:32:42.516128 kubelet[2579]: I0325 01:32:42.516104 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:32:42.516128 kubelet[2579]: I0325 01:32:42.516120 2579 policy_none.go:49] "None policy: Start" Mar 25 01:32:42.516750 kubelet[2579]: I0325 01:32:42.516732 2579 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:32:42.516806 kubelet[2579]: I0325 01:32:42.516756 2579 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:32:42.516940 kubelet[2579]: I0325 01:32:42.516927 2579 state_mem.go:75] "Updated machine memory state" Mar 25 01:32:42.520671 kubelet[2579]: I0325 01:32:42.520509 2579 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:32:42.520671 kubelet[2579]: I0325 01:32:42.520667 2579 
eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:32:42.520770 kubelet[2579]: I0325 01:32:42.520676 2579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:32:42.520852 kubelet[2579]: I0325 01:32:42.520833 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:32:42.624969 kubelet[2579]: I0325 01:32:42.624886 2579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 25 01:32:42.631214 kubelet[2579]: I0325 01:32:42.631065 2579 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 25 01:32:42.631214 kubelet[2579]: I0325 01:32:42.631132 2579 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 25 01:32:42.662877 kubelet[2579]: I0325 01:32:42.662794 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:42.662965 kubelet[2579]: I0325 01:32:42.662887 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:42.662965 kubelet[2579]: I0325 01:32:42.662907 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 25 01:32:42.662965 kubelet[2579]: 
I0325 01:32:42.662923 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcac1880986b7da700b3ab24032bdecb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcac1880986b7da700b3ab24032bdecb\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:42.662965 kubelet[2579]: I0325 01:32:42.662938 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcac1880986b7da700b3ab24032bdecb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcac1880986b7da700b3ab24032bdecb\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:42.662965 kubelet[2579]: I0325 01:32:42.662953 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:42.663082 kubelet[2579]: I0325 01:32:42.662968 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:42.663082 kubelet[2579]: I0325 01:32:42.662987 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:32:42.663082 
kubelet[2579]: I0325 01:32:42.663006 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcac1880986b7da700b3ab24032bdecb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dcac1880986b7da700b3ab24032bdecb\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:42.890909 kubelet[2579]: E0325 01:32:42.890787 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:42.891399 kubelet[2579]: E0325 01:32:42.891091 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:42.891561 kubelet[2579]: E0325 01:32:42.891449 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:43.010558 sudo[2614]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 25 01:32:43.010821 sudo[2614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 25 01:32:43.434564 sudo[2614]: pam_unix(sudo:session): session closed for user root Mar 25 01:32:43.457234 kubelet[2579]: I0325 01:32:43.457197 2579 apiserver.go:52] "Watching apiserver" Mar 25 01:32:43.461786 kubelet[2579]: I0325 01:32:43.461747 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 25 01:32:43.498199 kubelet[2579]: E0325 01:32:43.497513 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:43.498199 kubelet[2579]: E0325 01:32:43.498056 2579 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:43.506109 kubelet[2579]: E0325 01:32:43.506084 2579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 25 01:32:43.506883 kubelet[2579]: E0325 01:32:43.506625 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:43.524878 kubelet[2579]: I0325 01:32:43.524822 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.524809582 podStartE2EDuration="1.524809582s" podCreationTimestamp="2025-03-25 01:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:32:43.524314577 +0000 UTC m=+1.113597764" watchObservedRunningTime="2025-03-25 01:32:43.524809582 +0000 UTC m=+1.114092728" Mar 25 01:32:43.539029 kubelet[2579]: I0325 01:32:43.538917 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5389017310000002 podStartE2EDuration="1.538901731s" podCreationTimestamp="2025-03-25 01:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:32:43.531144571 +0000 UTC m=+1.120427757" watchObservedRunningTime="2025-03-25 01:32:43.538901731 +0000 UTC m=+1.128184837" Mar 25 01:32:43.539319 kubelet[2579]: I0325 01:32:43.539138 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.539131201 
podStartE2EDuration="1.539131201s" podCreationTimestamp="2025-03-25 01:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:32:43.538617825 +0000 UTC m=+1.127900971" watchObservedRunningTime="2025-03-25 01:32:43.539131201 +0000 UTC m=+1.128414347" Mar 25 01:32:44.498835 kubelet[2579]: E0325 01:32:44.498793 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:45.065293 sudo[1685]: pam_unix(sudo:session): session closed for user root Mar 25 01:32:45.066421 sshd[1684]: Connection closed by 10.0.0.1 port 43794 Mar 25 01:32:45.066741 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Mar 25 01:32:45.069173 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:43794.service: Deactivated successfully. Mar 25 01:32:45.070851 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:32:45.071044 systemd[1]: session-7.scope: Consumed 7.376s CPU time, 263.5M memory peak. Mar 25 01:32:45.072430 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:32:45.073399 systemd-logind[1467]: Removed session 7. 
Mar 25 01:32:46.412877 kubelet[2579]: E0325 01:32:46.412827 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:47.809715 kubelet[2579]: E0325 01:32:47.809669 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:48.503596 kubelet[2579]: E0325 01:32:48.503548 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:48.872513 kubelet[2579]: I0325 01:32:48.872474 2579 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:32:48.872879 containerd[1484]: time="2025-03-25T01:32:48.872812271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 01:32:48.873078 kubelet[2579]: I0325 01:32:48.872999 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:32:49.831902 systemd[1]: Created slice kubepods-besteffort-pod8e2d448d_59b4_4c38_8475_a60b8530440c.slice - libcontainer container kubepods-besteffort-pod8e2d448d_59b4_4c38_8475_a60b8530440c.slice. Mar 25 01:32:49.846257 systemd[1]: Created slice kubepods-burstable-pod754d5315_6528_4b3e_87a3_834fcc09b71f.slice - libcontainer container kubepods-burstable-pod754d5315_6528_4b3e_87a3_834fcc09b71f.slice. 
Mar 25 01:32:49.911608 kubelet[2579]: I0325 01:32:49.911548 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-kernel\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912037 kubelet[2579]: I0325 01:32:49.911613 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-run\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912037 kubelet[2579]: I0325 01:32:49.911654 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2hk\" (UniqueName: \"kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-kube-api-access-nv2hk\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912037 kubelet[2579]: I0325 01:32:49.911681 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-etc-cni-netd\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912037 kubelet[2579]: I0325 01:32:49.911707 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-xtables-lock\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912037 kubelet[2579]: I0325 01:32:49.911727 2579 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e2d448d-59b4-4c38-8475-a60b8530440c-kube-proxy\") pod \"kube-proxy-zmrcj\" (UID: \"8e2d448d-59b4-4c38-8475-a60b8530440c\") " pod="kube-system/kube-proxy-zmrcj" Mar 25 01:32:49.912037 kubelet[2579]: I0325 01:32:49.911741 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-lib-modules\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912247 kubelet[2579]: I0325 01:32:49.911756 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/754d5315-6528-4b3e-87a3-834fcc09b71f-clustermesh-secrets\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912247 kubelet[2579]: I0325 01:32:49.911771 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-config-path\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912247 kubelet[2579]: I0325 01:32:49.911786 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-hubble-tls\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912247 kubelet[2579]: I0325 01:32:49.911808 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-cgroup\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912247 kubelet[2579]: I0325 01:32:49.911824 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e2d448d-59b4-4c38-8475-a60b8530440c-lib-modules\") pod \"kube-proxy-zmrcj\" (UID: \"8e2d448d-59b4-4c38-8475-a60b8530440c\") " pod="kube-system/kube-proxy-zmrcj" Mar 25 01:32:49.912347 kubelet[2579]: I0325 01:32:49.911839 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phlfx\" (UniqueName: \"kubernetes.io/projected/8e2d448d-59b4-4c38-8475-a60b8530440c-kube-api-access-phlfx\") pod \"kube-proxy-zmrcj\" (UID: \"8e2d448d-59b4-4c38-8475-a60b8530440c\") " pod="kube-system/kube-proxy-zmrcj" Mar 25 01:32:49.912347 kubelet[2579]: I0325 01:32:49.911899 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-hostproc\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912347 kubelet[2579]: I0325 01:32:49.911968 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cni-path\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912347 kubelet[2579]: I0325 01:32:49.911992 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e2d448d-59b4-4c38-8475-a60b8530440c-xtables-lock\") pod \"kube-proxy-zmrcj\" (UID: 
\"8e2d448d-59b4-4c38-8475-a60b8530440c\") " pod="kube-system/kube-proxy-zmrcj" Mar 25 01:32:49.912347 kubelet[2579]: I0325 01:32:49.912009 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-bpf-maps\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.912347 kubelet[2579]: I0325 01:32:49.912026 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-net\") pod \"cilium-g2s9h\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " pod="kube-system/cilium-g2s9h" Mar 25 01:32:49.986358 systemd[1]: Created slice kubepods-besteffort-pod482fbfe6_5429_4fc6_a231_ca59762b9968.slice - libcontainer container kubepods-besteffort-pod482fbfe6_5429_4fc6_a231_ca59762b9968.slice. 
Mar 25 01:32:50.017898 kubelet[2579]: I0325 01:32:50.012537 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/482fbfe6-5429-4fc6-a231-ca59762b9968-cilium-config-path\") pod \"cilium-operator-5d85765b45-pbxxj\" (UID: \"482fbfe6-5429-4fc6-a231-ca59762b9968\") " pod="kube-system/cilium-operator-5d85765b45-pbxxj" Mar 25 01:32:50.017898 kubelet[2579]: I0325 01:32:50.012589 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzrb\" (UniqueName: \"kubernetes.io/projected/482fbfe6-5429-4fc6-a231-ca59762b9968-kube-api-access-zzzrb\") pod \"cilium-operator-5d85765b45-pbxxj\" (UID: \"482fbfe6-5429-4fc6-a231-ca59762b9968\") " pod="kube-system/cilium-operator-5d85765b45-pbxxj" Mar 25 01:32:50.144992 kubelet[2579]: E0325 01:32:50.144898 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:50.145701 containerd[1484]: time="2025-03-25T01:32:50.145661541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmrcj,Uid:8e2d448d-59b4-4c38-8475-a60b8530440c,Namespace:kube-system,Attempt:0,}" Mar 25 01:32:50.149290 kubelet[2579]: E0325 01:32:50.149263 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:50.149873 containerd[1484]: time="2025-03-25T01:32:50.149733305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2s9h,Uid:754d5315-6528-4b3e-87a3-834fcc09b71f,Namespace:kube-system,Attempt:0,}" Mar 25 01:32:50.168669 containerd[1484]: time="2025-03-25T01:32:50.168625316Z" level=info msg="connecting to shim 669829de9feb489c532a3f5c6a2b55c1f69397b8039f2532a7817aaf5790a6dc" 
address="unix:///run/containerd/s/da86e824e44e7b02480a7259d850e51e8073cfa28e43e4f11cd865f28dd4035f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:32:50.181551 containerd[1484]: time="2025-03-25T01:32:50.181510137Z" level=info msg="connecting to shim 60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6" address="unix:///run/containerd/s/47e4704638fe046c63353c730789e6c27d67c72a4893ce5df1c6b86650a02330" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:32:50.195039 systemd[1]: Started cri-containerd-669829de9feb489c532a3f5c6a2b55c1f69397b8039f2532a7817aaf5790a6dc.scope - libcontainer container 669829de9feb489c532a3f5c6a2b55c1f69397b8039f2532a7817aaf5790a6dc. Mar 25 01:32:50.203785 systemd[1]: Started cri-containerd-60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6.scope - libcontainer container 60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6. Mar 25 01:32:50.234007 containerd[1484]: time="2025-03-25T01:32:50.233957921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmrcj,Uid:8e2d448d-59b4-4c38-8475-a60b8530440c,Namespace:kube-system,Attempt:0,} returns sandbox id \"669829de9feb489c532a3f5c6a2b55c1f69397b8039f2532a7817aaf5790a6dc\"" Mar 25 01:32:50.235137 kubelet[2579]: E0325 01:32:50.234847 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:50.237330 containerd[1484]: time="2025-03-25T01:32:50.237257579Z" level=info msg="CreateContainer within sandbox \"669829de9feb489c532a3f5c6a2b55c1f69397b8039f2532a7817aaf5790a6dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:32:50.237832 containerd[1484]: time="2025-03-25T01:32:50.237774344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2s9h,Uid:754d5315-6528-4b3e-87a3-834fcc09b71f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\"" Mar 25 01:32:50.238464 kubelet[2579]: E0325 01:32:50.238442 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:50.240133 containerd[1484]: time="2025-03-25T01:32:50.239966152Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 25 01:32:50.244637 containerd[1484]: time="2025-03-25T01:32:50.244600466Z" level=info msg="Container 32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:32:50.252497 containerd[1484]: time="2025-03-25T01:32:50.252443148Z" level=info msg="CreateContainer within sandbox \"669829de9feb489c532a3f5c6a2b55c1f69397b8039f2532a7817aaf5790a6dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5\"" Mar 25 01:32:50.253187 containerd[1484]: time="2025-03-25T01:32:50.253145575Z" level=info msg="StartContainer for \"32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5\"" Mar 25 01:32:50.254792 containerd[1484]: time="2025-03-25T01:32:50.254760345Z" level=info msg="connecting to shim 32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5" address="unix:///run/containerd/s/da86e824e44e7b02480a7259d850e51e8073cfa28e43e4f11cd865f28dd4035f" protocol=ttrpc version=3 Mar 25 01:32:50.280051 systemd[1]: Started cri-containerd-32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5.scope - libcontainer container 32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5. 
Mar 25 01:32:50.291835 kubelet[2579]: E0325 01:32:50.291801 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:32:50.292450 containerd[1484]: time="2025-03-25T01:32:50.292406252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pbxxj,Uid:482fbfe6-5429-4fc6-a231-ca59762b9968,Namespace:kube-system,Attempt:0,}" Mar 25 01:32:50.309244 containerd[1484]: time="2025-03-25T01:32:50.308864482Z" level=info msg="connecting to shim 52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc" address="unix:///run/containerd/s/ef4bb5fd171172c083801f00bf5daa49ab5fd193dd583bc6ce187d03a7186c30" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:32:50.321626 containerd[1484]: time="2025-03-25T01:32:50.321588534Z" level=info msg="StartContainer for \"32a76df0d45507978e64052fe82bb98c8e803f64eb762758a641314c1f6c2bf5\" returns successfully" Mar 25 01:32:50.330031 systemd[1]: Started cri-containerd-52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc.scope - libcontainer container 52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc. 
Mar 25 01:32:50.362634 containerd[1484]: time="2025-03-25T01:32:50.362431923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pbxxj,Uid:482fbfe6-5429-4fc6-a231-ca59762b9968,Namespace:kube-system,Attempt:0,} returns sandbox id \"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\""
Mar 25 01:32:50.364286 kubelet[2579]: E0325 01:32:50.364263 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:32:50.510666 kubelet[2579]: E0325 01:32:50.510633 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:32:51.704824 kubelet[2579]: E0325 01:32:51.704794 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:32:51.718039 kubelet[2579]: I0325 01:32:51.717974 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zmrcj" podStartSLOduration=2.717958388 podStartE2EDuration="2.717958388s" podCreationTimestamp="2025-03-25 01:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:32:50.520744928 +0000 UTC m=+8.110028034" watchObservedRunningTime="2025-03-25 01:32:51.717958388 +0000 UTC m=+9.307241534"
Mar 25 01:32:52.516705 kubelet[2579]: E0325 01:32:52.516678 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:32:56.420805 kubelet[2579]: E0325 01:32:56.420764 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:32:57.344335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546673865.mount: Deactivated successfully.
Mar 25 01:32:58.632820 containerd[1484]: time="2025-03-25T01:32:58.632762189Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:58.633971 containerd[1484]: time="2025-03-25T01:32:58.633772073Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 25 01:32:58.634677 containerd[1484]: time="2025-03-25T01:32:58.634641705Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:32:58.636276 containerd[1484]: time="2025-03-25T01:32:58.636240801Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.396236149s"
Mar 25 01:32:58.636347 containerd[1484]: time="2025-03-25T01:32:58.636278695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 25 01:32:58.638611 containerd[1484]: time="2025-03-25T01:32:58.638570479Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 25 01:32:58.641402 containerd[1484]: time="2025-03-25T01:32:58.641348239Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:32:58.651114 containerd[1484]: time="2025-03-25T01:32:58.650882030Z" level=info msg="Container 9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:32:58.659472 containerd[1484]: time="2025-03-25T01:32:58.659418702Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\""
Mar 25 01:32:58.660522 containerd[1484]: time="2025-03-25T01:32:58.660416701Z" level=info msg="StartContainer for \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\""
Mar 25 01:32:58.661893 containerd[1484]: time="2025-03-25T01:32:58.661851377Z" level=info msg="connecting to shim 9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35" address="unix:///run/containerd/s/47e4704638fe046c63353c730789e6c27d67c72a4893ce5df1c6b86650a02330" protocol=ttrpc version=3
Mar 25 01:32:58.701091 systemd[1]: Started cri-containerd-9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35.scope - libcontainer container 9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35.
Mar 25 01:32:58.759442 containerd[1484]: time="2025-03-25T01:32:58.759395201Z" level=info msg="StartContainer for \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" returns successfully"
Mar 25 01:32:58.785075 systemd[1]: cri-containerd-9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35.scope: Deactivated successfully.
Mar 25 01:32:58.812775 containerd[1484]: time="2025-03-25T01:32:58.812721392Z" level=info msg="received exit event container_id:\"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" id:\"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" pid:3001 exited_at:{seconds:1742866378 nanos:801127819}"
Mar 25 01:32:58.812925 containerd[1484]: time="2025-03-25T01:32:58.812816226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" id:\"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" pid:3001 exited_at:{seconds:1742866378 nanos:801127819}"
Mar 25 01:32:59.142254 update_engine[1474]: I20250325 01:32:59.142187  1474 update_attempter.cc:509] Updating boot flags...
Mar 25 01:32:59.168898 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3040)
Mar 25 01:32:59.213881 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3039)
Mar 25 01:32:59.257884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3039)
Mar 25 01:32:59.539606 kubelet[2579]: E0325 01:32:59.539573 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:32:59.543845 containerd[1484]: time="2025-03-25T01:32:59.542127549Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:32:59.549772 containerd[1484]: time="2025-03-25T01:32:59.549726309Z" level=info msg="Container 769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:32:59.565440 containerd[1484]: time="2025-03-25T01:32:59.565392670Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\""
Mar 25 01:32:59.565959 containerd[1484]: time="2025-03-25T01:32:59.565889640Z" level=info msg="StartContainer for \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\""
Mar 25 01:32:59.566706 containerd[1484]: time="2025-03-25T01:32:59.566681871Z" level=info msg="connecting to shim 769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9" address="unix:///run/containerd/s/47e4704638fe046c63353c730789e6c27d67c72a4893ce5df1c6b86650a02330" protocol=ttrpc version=3
Mar 25 01:32:59.592031 systemd[1]: Started cri-containerd-769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9.scope - libcontainer container 769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9.
Mar 25 01:32:59.613023 containerd[1484]: time="2025-03-25T01:32:59.612980874Z" level=info msg="StartContainer for \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" returns successfully"
Mar 25 01:32:59.641756 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:32:59.641996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:32:59.642206 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:32:59.643709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:32:59.644091 systemd[1]: cri-containerd-769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9.scope: Deactivated successfully.
Mar 25 01:32:59.644914 containerd[1484]: time="2025-03-25T01:32:59.644877108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" id:\"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" pid:3060 exited_at:{seconds:1742866379 nanos:644497658}"
Mar 25 01:32:59.645271 containerd[1484]: time="2025-03-25T01:32:59.644999030Z" level=info msg="received exit event container_id:\"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" id:\"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" pid:3060 exited_at:{seconds:1742866379 nanos:644497658}"
Mar 25 01:32:59.651003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35-rootfs.mount: Deactivated successfully.
Mar 25 01:32:59.651093 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:32:59.662540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9-rootfs.mount: Deactivated successfully.
Mar 25 01:32:59.672925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:33:00.539999 kubelet[2579]: E0325 01:33:00.539954 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:00.542687 containerd[1484]: time="2025-03-25T01:33:00.542344545Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 25 01:33:00.573562 containerd[1484]: time="2025-03-25T01:33:00.573486765Z" level=info msg="Container 393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:00.574611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878896238.mount: Deactivated successfully.
Mar 25 01:33:00.580733 containerd[1484]: time="2025-03-25T01:33:00.580617126Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\""
Mar 25 01:33:00.582356 containerd[1484]: time="2025-03-25T01:33:00.582330484Z" level=info msg="StartContainer for \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\""
Mar 25 01:33:00.583667 containerd[1484]: time="2025-03-25T01:33:00.583635349Z" level=info msg="connecting to shim 393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8" address="unix:///run/containerd/s/47e4704638fe046c63353c730789e6c27d67c72a4893ce5df1c6b86650a02330" protocol=ttrpc version=3
Mar 25 01:33:00.602079 systemd[1]: Started cri-containerd-393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8.scope - libcontainer container 393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8.
Mar 25 01:33:00.636415 containerd[1484]: time="2025-03-25T01:33:00.636367519Z" level=info msg="StartContainer for \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" returns successfully"
Mar 25 01:33:00.661096 systemd[1]: cri-containerd-393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8.scope: Deactivated successfully.
Mar 25 01:33:00.670540 containerd[1484]: time="2025-03-25T01:33:00.670472704Z" level=info msg="received exit event container_id:\"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" id:\"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" pid:3107 exited_at:{seconds:1742866380 nanos:670284202}"
Mar 25 01:33:00.670848 containerd[1484]: time="2025-03-25T01:33:00.670549769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" id:\"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" pid:3107 exited_at:{seconds:1742866380 nanos:670284202}"
Mar 25 01:33:00.687883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8-rootfs.mount: Deactivated successfully.
Mar 25 01:33:01.554510 kubelet[2579]: E0325 01:33:01.554481 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:01.557618 containerd[1484]: time="2025-03-25T01:33:01.557012943Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 25 01:33:01.569380 containerd[1484]: time="2025-03-25T01:33:01.569318438Z" level=info msg="Container 7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:01.582582 containerd[1484]: time="2025-03-25T01:33:01.582538617Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\""
Mar 25 01:33:01.583233 containerd[1484]: time="2025-03-25T01:33:01.583075103Z" level=info msg="StartContainer for \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\""
Mar 25 01:33:01.584562 containerd[1484]: time="2025-03-25T01:33:01.584355620Z" level=info msg="connecting to shim 7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106" address="unix:///run/containerd/s/47e4704638fe046c63353c730789e6c27d67c72a4893ce5df1c6b86650a02330" protocol=ttrpc version=3
Mar 25 01:33:01.605024 systemd[1]: Started cri-containerd-7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106.scope - libcontainer container 7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106.
Mar 25 01:33:01.630021 systemd[1]: cri-containerd-7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106.scope: Deactivated successfully.
Mar 25 01:33:01.632101 containerd[1484]: time="2025-03-25T01:33:01.632051169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" id:\"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" pid:3148 exited_at:{seconds:1742866381 nanos:630219881}"
Mar 25 01:33:01.638588 containerd[1484]: time="2025-03-25T01:33:01.638470639Z" level=info msg="received exit event container_id:\"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" id:\"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" pid:3148 exited_at:{seconds:1742866381 nanos:630219881}"
Mar 25 01:33:01.640184 containerd[1484]: time="2025-03-25T01:33:01.640156922Z" level=info msg="StartContainer for \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" returns successfully"
Mar 25 01:33:01.657311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106-rootfs.mount: Deactivated successfully.
Mar 25 01:33:01.794365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236609290.mount: Deactivated successfully.
Mar 25 01:33:02.217667 containerd[1484]: time="2025-03-25T01:33:02.217622969Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:33:02.218536 containerd[1484]: time="2025-03-25T01:33:02.218323016Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 25 01:33:02.219416 containerd[1484]: time="2025-03-25T01:33:02.219368685Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:33:02.220785 containerd[1484]: time="2025-03-25T01:33:02.220752373Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.58214276s"
Mar 25 01:33:02.220785 containerd[1484]: time="2025-03-25T01:33:02.220783543Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 25 01:33:02.222870 containerd[1484]: time="2025-03-25T01:33:02.222820705Z" level=info msg="CreateContainer within sandbox \"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 25 01:33:02.238032 containerd[1484]: time="2025-03-25T01:33:02.237985466Z" level=info msg="Container 93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:02.243525 containerd[1484]: time="2025-03-25T01:33:02.243476408Z" level=info msg="CreateContainer within sandbox \"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\""
Mar 25 01:33:02.244024 containerd[1484]: time="2025-03-25T01:33:02.243987399Z" level=info msg="StartContainer for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\""
Mar 25 01:33:02.245609 containerd[1484]: time="2025-03-25T01:33:02.245456113Z" level=info msg="connecting to shim 93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8" address="unix:///run/containerd/s/ef4bb5fd171172c083801f00bf5daa49ab5fd193dd583bc6ce187d03a7186c30" protocol=ttrpc version=3
Mar 25 01:33:02.267060 systemd[1]: Started cri-containerd-93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8.scope - libcontainer container 93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8.
Mar 25 01:33:02.295713 containerd[1484]: time="2025-03-25T01:33:02.293326458Z" level=info msg="StartContainer for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" returns successfully"
Mar 25 01:33:02.563189 kubelet[2579]: E0325 01:33:02.562977 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:02.566642 containerd[1484]: time="2025-03-25T01:33:02.566410348Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 01:33:02.567509 kubelet[2579]: E0325 01:33:02.567260 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:02.616400 kubelet[2579]: I0325 01:33:02.616323 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pbxxj" podStartSLOduration=1.75929772 podStartE2EDuration="13.616308532s" podCreationTimestamp="2025-03-25 01:32:49 +0000 UTC" firstStartedPulling="2025-03-25 01:32:50.364669116 +0000 UTC m=+7.953952262" lastFinishedPulling="2025-03-25 01:33:02.221679928 +0000 UTC m=+19.810963074" observedRunningTime="2025-03-25 01:33:02.615980475 +0000 UTC m=+20.205263621" watchObservedRunningTime="2025-03-25 01:33:02.616308532 +0000 UTC m=+20.205591638"
Mar 25 01:33:02.627193 containerd[1484]: time="2025-03-25T01:33:02.627148575Z" level=info msg="Container b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:02.638024 containerd[1484]: time="2025-03-25T01:33:02.637981456Z" level=info msg="CreateContainer within sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\""
Mar 25 01:33:02.641138 containerd[1484]: time="2025-03-25T01:33:02.641113382Z" level=info msg="StartContainer for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\""
Mar 25 01:33:02.642060 containerd[1484]: time="2025-03-25T01:33:02.642035014Z" level=info msg="connecting to shim b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f" address="unix:///run/containerd/s/47e4704638fe046c63353c730789e6c27d67c72a4893ce5df1c6b86650a02330" protocol=ttrpc version=3
Mar 25 01:33:02.671063 systemd[1]: Started cri-containerd-b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f.scope - libcontainer container b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f.
Mar 25 01:33:02.730032 containerd[1484]: time="2025-03-25T01:33:02.729910219Z" level=info msg="StartContainer for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" returns successfully"
Mar 25 01:33:02.852785 containerd[1484]: time="2025-03-25T01:33:02.851806717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" id:\"1c828ddfe7740f3b82d15d1425af6e60c26f8d16dcd0aef39cfaa1472ea1488b\" pid:3266 exited_at:{seconds:1742866382 nanos:851505668}"
Mar 25 01:33:02.909536 kubelet[2579]: I0325 01:33:02.909144 2579 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 25 01:33:02.956591 systemd[1]: Created slice kubepods-burstable-pode238055f_b213_4984_8548_ba7464280952.slice - libcontainer container kubepods-burstable-pode238055f_b213_4984_8548_ba7464280952.slice.
Mar 25 01:33:02.964138 systemd[1]: Created slice kubepods-burstable-pod2fe57d1d_be8b_4cb0_9af7_df8474c39866.slice - libcontainer container kubepods-burstable-pod2fe57d1d_be8b_4cb0_9af7_df8474c39866.slice.
Mar 25 01:33:03.102891 kubelet[2579]: I0325 01:33:03.102681 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e238055f-b213-4984-8548-ba7464280952-config-volume\") pod \"coredns-6f6b679f8f-bvnkn\" (UID: \"e238055f-b213-4984-8548-ba7464280952\") " pod="kube-system/coredns-6f6b679f8f-bvnkn"
Mar 25 01:33:03.102891 kubelet[2579]: I0325 01:33:03.102738 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkjz\" (UniqueName: \"kubernetes.io/projected/e238055f-b213-4984-8548-ba7464280952-kube-api-access-9hkjz\") pod \"coredns-6f6b679f8f-bvnkn\" (UID: \"e238055f-b213-4984-8548-ba7464280952\") " pod="kube-system/coredns-6f6b679f8f-bvnkn"
Mar 25 01:33:03.102891 kubelet[2579]: I0325 01:33:03.102763 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fe57d1d-be8b-4cb0-9af7-df8474c39866-config-volume\") pod \"coredns-6f6b679f8f-xtzvt\" (UID: \"2fe57d1d-be8b-4cb0-9af7-df8474c39866\") " pod="kube-system/coredns-6f6b679f8f-xtzvt"
Mar 25 01:33:03.102891 kubelet[2579]: I0325 01:33:03.102782 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c257s\" (UniqueName: \"kubernetes.io/projected/2fe57d1d-be8b-4cb0-9af7-df8474c39866-kube-api-access-c257s\") pod \"coredns-6f6b679f8f-xtzvt\" (UID: \"2fe57d1d-be8b-4cb0-9af7-df8474c39866\") " pod="kube-system/coredns-6f6b679f8f-xtzvt"
Mar 25 01:33:03.259489 kubelet[2579]: E0325 01:33:03.259452 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:03.260074 containerd[1484]: time="2025-03-25T01:33:03.260032972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bvnkn,Uid:e238055f-b213-4984-8548-ba7464280952,Namespace:kube-system,Attempt:0,}"
Mar 25 01:33:03.270896 kubelet[2579]: E0325 01:33:03.270578 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:03.271166 containerd[1484]: time="2025-03-25T01:33:03.271135341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xtzvt,Uid:2fe57d1d-be8b-4cb0-9af7-df8474c39866,Namespace:kube-system,Attempt:0,}"
Mar 25 01:33:03.573187 kubelet[2579]: E0325 01:33:03.573087 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:03.573187 kubelet[2579]: E0325 01:33:03.573124 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:04.574924 kubelet[2579]: E0325 01:33:04.574805 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:05.019948 systemd-networkd[1406]: cilium_host: Link UP
Mar 25 01:33:05.020082 systemd-networkd[1406]: cilium_net: Link UP
Mar 25 01:33:05.020228 systemd-networkd[1406]: cilium_net: Gained carrier
Mar 25 01:33:05.020347 systemd-networkd[1406]: cilium_host: Gained carrier
Mar 25 01:33:05.075958 systemd-networkd[1406]: cilium_host: Gained IPv6LL
Mar 25 01:33:05.097748 systemd-networkd[1406]: cilium_vxlan: Link UP
Mar 25 01:33:05.097756 systemd-networkd[1406]: cilium_vxlan: Gained carrier
Mar 25 01:33:05.404971 kernel: NET: Registered PF_ALG protocol family
Mar 25 01:33:05.577609 kubelet[2579]: E0325 01:33:05.577576 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:05.789970 systemd-networkd[1406]: cilium_net: Gained IPv6LL
Mar 25 01:33:05.962472 systemd-networkd[1406]: lxc_health: Link UP
Mar 25 01:33:05.963495 systemd-networkd[1406]: lxc_health: Gained carrier
Mar 25 01:33:06.172893 kubelet[2579]: I0325 01:33:06.172821 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g2s9h" podStartSLOduration=8.773495667 podStartE2EDuration="17.172806169s" podCreationTimestamp="2025-03-25 01:32:49 +0000 UTC" firstStartedPulling="2025-03-25 01:32:50.239126089 +0000 UTC m=+7.828409235" lastFinishedPulling="2025-03-25 01:32:58.638436591 +0000 UTC m=+16.227719737" observedRunningTime="2025-03-25 01:33:03.589103909 +0000 UTC m=+21.178387055" watchObservedRunningTime="2025-03-25 01:33:06.172806169 +0000 UTC m=+23.762089315"
Mar 25 01:33:06.401922 kernel: eth0: renamed from tmpfab4c
Mar 25 01:33:06.409097 systemd-networkd[1406]: lxcc4a815e57019: Link UP
Mar 25 01:33:06.419877 kernel: eth0: renamed from tmp39de0
Mar 25 01:33:06.425208 systemd-networkd[1406]: lxcc4a815e57019: Gained carrier
Mar 25 01:33:06.425374 systemd-networkd[1406]: lxca017338fd9d3: Link UP
Mar 25 01:33:06.425622 systemd-networkd[1406]: lxca017338fd9d3: Gained carrier
Mar 25 01:33:06.430012 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL
Mar 25 01:33:06.581193 kubelet[2579]: E0325 01:33:06.579671 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:07.205245 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Mar 25 01:33:07.581207 kubelet[2579]: E0325 01:33:07.581165 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:07.710140 systemd-networkd[1406]: lxca017338fd9d3: Gained IPv6LL
Mar 25 01:33:08.094048 systemd-networkd[1406]: lxcc4a815e57019: Gained IPv6LL
Mar 25 01:33:08.526980 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:38930.service - OpenSSH per-connection server daemon (10.0.0.1:38930).
Mar 25 01:33:08.570018 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw
Mar 25 01:33:08.571311 sshd-session[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:08.575654 systemd-logind[1467]: New session 8 of user core.
Mar 25 01:33:08.583213 kubelet[2579]: E0325 01:33:08.583138 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:08.583999 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 25 01:33:08.723071 sshd[3752]: Connection closed by 10.0.0.1 port 38930
Mar 25 01:33:08.723556 sshd-session[3750]: pam_unix(sshd:session): session closed for user core
Mar 25 01:33:08.727476 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:38930.service: Deactivated successfully.
Mar 25 01:33:08.729161 systemd[1]: session-8.scope: Deactivated successfully.
Mar 25 01:33:08.729932 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit.
Mar 25 01:33:08.730710 systemd-logind[1467]: Removed session 8.
Mar 25 01:33:09.893996 containerd[1484]: time="2025-03-25T01:33:09.893940804Z" level=info msg="connecting to shim fab4c133cd10dc72f9c02f0b5f630d0839e21e5b9d99548deb86cf5e985c7f1d" address="unix:///run/containerd/s/6fe89127ccf7e5bf5f8fed8da4ebe47f724fa41f8f36a97a84cfb900bd70ad21" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:33:09.895288 containerd[1484]: time="2025-03-25T01:33:09.895259889Z" level=info msg="connecting to shim 39de08f530fbc257d9d731dcf42f90c08ad61ceef9088e452381530f585bfa65" address="unix:///run/containerd/s/741ff3c3da3f799d795cfd7a55be042531ac827c8b10cd7593c11b70cafcd7de" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:33:09.923090 systemd[1]: Started cri-containerd-fab4c133cd10dc72f9c02f0b5f630d0839e21e5b9d99548deb86cf5e985c7f1d.scope - libcontainer container fab4c133cd10dc72f9c02f0b5f630d0839e21e5b9d99548deb86cf5e985c7f1d.
Mar 25 01:33:09.926344 systemd[1]: Started cri-containerd-39de08f530fbc257d9d731dcf42f90c08ad61ceef9088e452381530f585bfa65.scope - libcontainer container 39de08f530fbc257d9d731dcf42f90c08ad61ceef9088e452381530f585bfa65.
Mar 25 01:33:09.935918 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 25 01:33:09.938163 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 25 01:33:09.958633 containerd[1484]: time="2025-03-25T01:33:09.958593289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xtzvt,Uid:2fe57d1d-be8b-4cb0-9af7-df8474c39866,Namespace:kube-system,Attempt:0,} returns sandbox id \"fab4c133cd10dc72f9c02f0b5f630d0839e21e5b9d99548deb86cf5e985c7f1d\""
Mar 25 01:33:09.959342 kubelet[2579]: E0325 01:33:09.959322 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:09.960740 containerd[1484]: time="2025-03-25T01:33:09.960712667Z" level=info msg="CreateContainer within sandbox \"fab4c133cd10dc72f9c02f0b5f630d0839e21e5b9d99548deb86cf5e985c7f1d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 25 01:33:09.961080 containerd[1484]: time="2025-03-25T01:33:09.961048379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bvnkn,Uid:e238055f-b213-4984-8548-ba7464280952,Namespace:kube-system,Attempt:0,} returns sandbox id \"39de08f530fbc257d9d731dcf42f90c08ad61ceef9088e452381530f585bfa65\""
Mar 25 01:33:09.961775 kubelet[2579]: E0325 01:33:09.961751 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:09.963939 containerd[1484]: time="2025-03-25T01:33:09.963907717Z" level=info msg="CreateContainer within sandbox \"39de08f530fbc257d9d731dcf42f90c08ad61ceef9088e452381530f585bfa65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 25 01:33:09.971559 containerd[1484]: time="2025-03-25T01:33:09.970515864Z" level=info msg="Container 9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:09.976995 containerd[1484]: time="2025-03-25T01:33:09.976959536Z" level=info msg="CreateContainer within sandbox \"fab4c133cd10dc72f9c02f0b5f630d0839e21e5b9d99548deb86cf5e985c7f1d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7\""
Mar 25 01:33:09.977421 containerd[1484]: time="2025-03-25T01:33:09.977380667Z" level=info msg="Container 4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:09.977704 containerd[1484]: time="2025-03-25T01:33:09.977673010Z" level=info msg="StartContainer for \"9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7\""
Mar 25 01:33:09.978510 containerd[1484]: time="2025-03-25T01:33:09.978446177Z" level=info msg="connecting to shim 9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7" address="unix:///run/containerd/s/6fe89127ccf7e5bf5f8fed8da4ebe47f724fa41f8f36a97a84cfb900bd70ad21" protocol=ttrpc version=3
Mar 25 01:33:09.984319 containerd[1484]: time="2025-03-25T01:33:09.984226546Z" level=info msg="CreateContainer within sandbox \"39de08f530fbc257d9d731dcf42f90c08ad61ceef9088e452381530f585bfa65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40\""
Mar 25 01:33:09.984722 containerd[1484]: time="2025-03-25T01:33:09.984689486Z" level=info msg="StartContainer for \"4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40\""
Mar 25 01:33:09.986120 containerd[1484]: time="2025-03-25T01:33:09.986094829Z" level=info msg="connecting to shim 4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40" address="unix:///run/containerd/s/741ff3c3da3f799d795cfd7a55be042531ac827c8b10cd7593c11b70cafcd7de" protocol=ttrpc version=3
Mar 25 01:33:09.996011 systemd[1]: Started cri-containerd-9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7.scope - libcontainer container 9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7.
Mar 25 01:33:10.000030 systemd[1]: Started cri-containerd-4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40.scope - libcontainer container 4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40.
Mar 25 01:33:10.030708 containerd[1484]: time="2025-03-25T01:33:10.030675159Z" level=info msg="StartContainer for \"9ae1f400667de9e18aa3cd11d7615eeb115e941d2901d996deb43696dc885fd7\" returns successfully"
Mar 25 01:33:10.036702 containerd[1484]: time="2025-03-25T01:33:10.036666321Z" level=info msg="StartContainer for \"4d3651b35b0074c8ccba150a0a35581816ed65d54cac6871593f605694ab1c40\" returns successfully"
Mar 25 01:33:10.590204 kubelet[2579]: E0325 01:33:10.590171 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:10.593440 kubelet[2579]: E0325 01:33:10.593265 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:10.601261 kubelet[2579]: I0325 01:33:10.600897 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xtzvt" podStartSLOduration=21.600885563 podStartE2EDuration="21.600885563s" podCreationTimestamp="2025-03-25 01:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:33:10.600783862 +0000 UTC m=+28.190067008" watchObservedRunningTime="2025-03-25 01:33:10.600885563 +0000 UTC m=+28.190168709"
Mar 25 01:33:10.620336 kubelet[2579]: I0325 01:33:10.620268 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bvnkn" podStartSLOduration=21.620252338 podStartE2EDuration="21.620252338s" podCreationTimestamp="2025-03-25 01:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:33:10.610775414 +0000 UTC m=+28.200058560" watchObservedRunningTime="2025-03-25 01:33:10.620252338 +0000 UTC m=+28.209535484"
Mar 25 01:33:10.872686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314713275.mount: Deactivated successfully.
Mar 25 01:33:11.594463 kubelet[2579]: E0325 01:33:11.594421 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:12.603219 kubelet[2579]: E0325 01:33:12.603109 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:13.261086 kubelet[2579]: E0325 01:33:13.260848 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:13.598741 kubelet[2579]: E0325 01:33:13.598607 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:13.738540 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:42404.service - OpenSSH per-connection server daemon (10.0.0.1:42404).
Mar 25 01:33:13.792371 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 42404 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:13.793895 sshd-session[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:13.798635 systemd-logind[1467]: New session 9 of user core. Mar 25 01:33:13.805088 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:33:13.918513 sshd[3950]: Connection closed by 10.0.0.1 port 42404 Mar 25 01:33:13.918763 sshd-session[3948]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:13.923343 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:42404.service: Deactivated successfully. Mar 25 01:33:13.925023 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:33:13.925582 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:33:13.926434 systemd-logind[1467]: Removed session 9. Mar 25 01:33:18.934106 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:42406.service - OpenSSH per-connection server daemon (10.0.0.1:42406). Mar 25 01:33:18.983750 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 42406 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:18.984942 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:18.989486 systemd-logind[1467]: New session 10 of user core. Mar 25 01:33:18.996042 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:33:19.118660 sshd[3967]: Connection closed by 10.0.0.1 port 42406 Mar 25 01:33:19.117706 sshd-session[3965]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:19.122560 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:42406.service: Deactivated successfully. Mar 25 01:33:19.124204 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:33:19.125473 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. 
Mar 25 01:33:19.126473 systemd-logind[1467]: Removed session 10. Mar 25 01:33:24.132477 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:56504.service - OpenSSH per-connection server daemon (10.0.0.1:56504). Mar 25 01:33:24.184092 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 56504 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:24.185403 sshd-session[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:24.189664 systemd-logind[1467]: New session 11 of user core. Mar 25 01:33:24.198036 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:33:24.309737 sshd[3988]: Connection closed by 10.0.0.1 port 56504 Mar 25 01:33:24.310290 sshd-session[3986]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:24.318398 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:56504.service: Deactivated successfully. Mar 25 01:33:24.320086 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:33:24.322173 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:33:24.324428 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:56516.service - OpenSSH per-connection server daemon (10.0.0.1:56516). Mar 25 01:33:24.325506 systemd-logind[1467]: Removed session 11. Mar 25 01:33:24.378508 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 56516 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:24.379819 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:24.384396 systemd-logind[1467]: New session 12 of user core. Mar 25 01:33:24.396017 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 25 01:33:24.547869 sshd[4004]: Connection closed by 10.0.0.1 port 56516 Mar 25 01:33:24.549184 sshd-session[4001]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:24.560738 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:56516.service: Deactivated successfully. Mar 25 01:33:24.563558 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:33:24.564835 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:33:24.568262 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:56532.service - OpenSSH per-connection server daemon (10.0.0.1:56532). Mar 25 01:33:24.570957 systemd-logind[1467]: Removed session 12. Mar 25 01:33:24.615141 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 56532 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:24.616413 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:24.620926 systemd-logind[1467]: New session 13 of user core. Mar 25 01:33:24.634010 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:33:24.752175 sshd[4018]: Connection closed by 10.0.0.1 port 56532 Mar 25 01:33:24.752476 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:24.756469 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:56532.service: Deactivated successfully. Mar 25 01:33:24.758478 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:33:24.760469 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:33:24.761406 systemd-logind[1467]: Removed session 13. Mar 25 01:33:29.764128 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:56544.service - OpenSSH per-connection server daemon (10.0.0.1:56544). 
Mar 25 01:33:29.811532 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 56544 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:29.812608 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:29.816663 systemd-logind[1467]: New session 14 of user core. Mar 25 01:33:29.825996 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:33:29.935165 sshd[4033]: Connection closed by 10.0.0.1 port 56544 Mar 25 01:33:29.935677 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:29.938907 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:56544.service: Deactivated successfully. Mar 25 01:33:29.941379 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:33:29.942113 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:33:29.943067 systemd-logind[1467]: Removed session 14. Mar 25 01:33:34.947111 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:40234.service - OpenSSH per-connection server daemon (10.0.0.1:40234). Mar 25 01:33:34.991184 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 40234 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:34.992523 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:34.997246 systemd-logind[1467]: New session 15 of user core. Mar 25 01:33:35.007055 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:33:35.111494 sshd[4049]: Connection closed by 10.0.0.1 port 40234 Mar 25 01:33:35.112037 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:35.126301 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:40234.service: Deactivated successfully. Mar 25 01:33:35.128078 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:33:35.128806 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. 
Mar 25 01:33:35.131289 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:40246.service - OpenSSH per-connection server daemon (10.0.0.1:40246). Mar 25 01:33:35.132872 systemd-logind[1467]: Removed session 15. Mar 25 01:33:35.179552 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 40246 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:35.181154 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:35.185300 systemd-logind[1467]: New session 16 of user core. Mar 25 01:33:35.189057 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:33:35.407963 sshd[4064]: Connection closed by 10.0.0.1 port 40246 Mar 25 01:33:35.408635 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:35.421213 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:40246.service: Deactivated successfully. Mar 25 01:33:35.423058 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:33:35.423798 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:33:35.425914 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:40256.service - OpenSSH per-connection server daemon (10.0.0.1:40256). Mar 25 01:33:35.427254 systemd-logind[1467]: Removed session 16. Mar 25 01:33:35.480546 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 40256 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:35.481956 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:35.486620 systemd-logind[1467]: New session 17 of user core. Mar 25 01:33:35.495034 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 25 01:33:36.891686 sshd[4078]: Connection closed by 10.0.0.1 port 40256 Mar 25 01:33:36.892251 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:36.902542 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:40256.service: Deactivated successfully. Mar 25 01:33:36.905897 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:33:36.907499 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:33:36.910346 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:40268.service - OpenSSH per-connection server daemon (10.0.0.1:40268). Mar 25 01:33:36.914239 systemd-logind[1467]: Removed session 17. Mar 25 01:33:36.958088 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 40268 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:36.959295 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:36.963447 systemd-logind[1467]: New session 18 of user core. Mar 25 01:33:36.976079 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 01:33:37.199944 sshd[4098]: Connection closed by 10.0.0.1 port 40268 Mar 25 01:33:37.200676 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:37.210232 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:40268.service: Deactivated successfully. Mar 25 01:33:37.212020 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:33:37.212760 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:33:37.214759 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:40284.service - OpenSSH per-connection server daemon (10.0.0.1:40284). Mar 25 01:33:37.216267 systemd-logind[1467]: Removed session 18. 
Mar 25 01:33:37.266054 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 40284 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:37.266954 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:37.270734 systemd-logind[1467]: New session 19 of user core. Mar 25 01:33:37.274987 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:33:37.384484 sshd[4112]: Connection closed by 10.0.0.1 port 40284 Mar 25 01:33:37.384853 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:37.389083 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:40284.service: Deactivated successfully. Mar 25 01:33:37.390903 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:33:37.392640 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:33:37.393486 systemd-logind[1467]: Removed session 19. Mar 25 01:33:42.396296 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:40296.service - OpenSSH per-connection server daemon (10.0.0.1:40296). Mar 25 01:33:42.445581 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 40296 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:42.446665 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:42.450537 systemd-logind[1467]: New session 20 of user core. Mar 25 01:33:42.462074 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:33:42.570616 sshd[4132]: Connection closed by 10.0.0.1 port 40296 Mar 25 01:33:42.571073 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:42.573709 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:40296.service: Deactivated successfully. Mar 25 01:33:42.575358 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:33:42.576658 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. 
Mar 25 01:33:42.577608 systemd-logind[1467]: Removed session 20. Mar 25 01:33:47.582234 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:60480.service - OpenSSH per-connection server daemon (10.0.0.1:60480). Mar 25 01:33:47.631248 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 60480 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:47.632350 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:47.635926 systemd-logind[1467]: New session 21 of user core. Mar 25 01:33:47.642058 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:33:47.747584 sshd[4150]: Connection closed by 10.0.0.1 port 60480 Mar 25 01:33:47.748076 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:47.750781 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Mar 25 01:33:47.751024 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:60480.service: Deactivated successfully. Mar 25 01:33:47.752620 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 01:33:47.754174 systemd-logind[1467]: Removed session 21. Mar 25 01:33:52.759125 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:58310.service - OpenSSH per-connection server daemon (10.0.0.1:58310). Mar 25 01:33:52.804257 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 58310 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:52.805521 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:52.809745 systemd-logind[1467]: New session 22 of user core. Mar 25 01:33:52.821005 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 25 01:33:52.926546 sshd[4167]: Connection closed by 10.0.0.1 port 58310 Mar 25 01:33:52.927031 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:52.944044 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:58310.service: Deactivated successfully. Mar 25 01:33:52.945407 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 01:33:52.946398 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Mar 25 01:33:52.948200 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:58314.service - OpenSSH per-connection server daemon (10.0.0.1:58314). Mar 25 01:33:52.949665 systemd-logind[1467]: Removed session 22. Mar 25 01:33:52.992521 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 58314 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:52.993644 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:52.997521 systemd-logind[1467]: New session 23 of user core. Mar 25 01:33:53.012013 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 01:33:54.636745 containerd[1484]: time="2025-03-25T01:33:54.636430774Z" level=info msg="StopContainer for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" with timeout 30 (s)" Mar 25 01:33:54.639328 containerd[1484]: time="2025-03-25T01:33:54.638932330Z" level=info msg="Stop container \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" with signal terminated" Mar 25 01:33:54.650491 systemd[1]: cri-containerd-93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8.scope: Deactivated successfully. 
Mar 25 01:33:54.654600 containerd[1484]: time="2025-03-25T01:33:54.650583114Z" level=info msg="received exit event container_id:\"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" id:\"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" pid:3200 exited_at:{seconds:1742866434 nanos:650213893}" Mar 25 01:33:54.654600 containerd[1484]: time="2025-03-25T01:33:54.650696429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" id:\"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" pid:3200 exited_at:{seconds:1742866434 nanos:650213893}" Mar 25 01:33:54.666758 containerd[1484]: time="2025-03-25T01:33:54.666704238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" id:\"5425bfcc40c76df4ecf7170d1a3d6cc6cf2504dd7dfc862f11687533fb02ccd6\" pid:4210 exited_at:{seconds:1742866434 nanos:666139146}" Mar 25 01:33:54.668677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8-rootfs.mount: Deactivated successfully. 
Mar 25 01:33:54.670579 containerd[1484]: time="2025-03-25T01:33:54.670551808Z" level=info msg="StopContainer for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" with timeout 2 (s)" Mar 25 01:33:54.670916 containerd[1484]: time="2025-03-25T01:33:54.670835154Z" level=info msg="Stop container \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" with signal terminated" Mar 25 01:33:54.672328 containerd[1484]: time="2025-03-25T01:33:54.672289122Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:33:54.677831 systemd-networkd[1406]: lxc_health: Link DOWN Mar 25 01:33:54.677837 systemd-networkd[1406]: lxc_health: Lost carrier Mar 25 01:33:54.681157 containerd[1484]: time="2025-03-25T01:33:54.681060168Z" level=info msg="StopContainer for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" returns successfully" Mar 25 01:33:54.682917 containerd[1484]: time="2025-03-25T01:33:54.682721166Z" level=info msg="StopPodSandbox for \"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\"" Mar 25 01:33:54.689086 containerd[1484]: time="2025-03-25T01:33:54.689043294Z" level=info msg="Container to stop \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:33:54.693550 systemd[1]: cri-containerd-b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f.scope: Deactivated successfully. Mar 25 01:33:54.693955 systemd[1]: cri-containerd-b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f.scope: Consumed 6.380s CPU time, 122.2M memory peak, 156K read from disk, 12.9M written to disk. 
Mar 25 01:33:54.696101 containerd[1484]: time="2025-03-25T01:33:54.695913954Z" level=info msg="received exit event container_id:\"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" id:\"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" pid:3236 exited_at:{seconds:1742866434 nanos:693829657}" Mar 25 01:33:54.696101 containerd[1484]: time="2025-03-25T01:33:54.696020389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" id:\"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" pid:3236 exited_at:{seconds:1742866434 nanos:693829657}" Mar 25 01:33:54.697113 systemd[1]: cri-containerd-52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc.scope: Deactivated successfully. Mar 25 01:33:54.709332 containerd[1484]: time="2025-03-25T01:33:54.709298093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" id:\"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" pid:2815 exit_status:137 exited_at:{seconds:1742866434 nanos:708831836}" Mar 25 01:33:54.715304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f-rootfs.mount: Deactivated successfully. 
Mar 25 01:33:54.725133 containerd[1484]: time="2025-03-25T01:33:54.725086073Z" level=info msg="StopContainer for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" returns successfully" Mar 25 01:33:54.725741 containerd[1484]: time="2025-03-25T01:33:54.725529851Z" level=info msg="StopPodSandbox for \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\"" Mar 25 01:33:54.725741 containerd[1484]: time="2025-03-25T01:33:54.725595608Z" level=info msg="Container to stop \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:33:54.725741 containerd[1484]: time="2025-03-25T01:33:54.725609727Z" level=info msg="Container to stop \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:33:54.725741 containerd[1484]: time="2025-03-25T01:33:54.725619886Z" level=info msg="Container to stop \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:33:54.725741 containerd[1484]: time="2025-03-25T01:33:54.725628126Z" level=info msg="Container to stop \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:33:54.725741 containerd[1484]: time="2025-03-25T01:33:54.725636166Z" level=info msg="Container to stop \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:33:54.731582 systemd[1]: cri-containerd-60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6.scope: Deactivated successfully. 
Mar 25 01:33:54.738973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc-rootfs.mount: Deactivated successfully. Mar 25 01:33:54.744766 containerd[1484]: time="2025-03-25T01:33:54.744634387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" id:\"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" pid:2732 exit_status:137 exited_at:{seconds:1742866434 nanos:731408720}" Mar 25 01:33:54.746197 containerd[1484]: time="2025-03-25T01:33:54.744764340Z" level=info msg="TearDown network for sandbox \"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" successfully" Mar 25 01:33:54.746197 containerd[1484]: time="2025-03-25T01:33:54.745711334Z" level=info msg="StopPodSandbox for \"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" returns successfully" Mar 25 01:33:54.746197 containerd[1484]: time="2025-03-25T01:33:54.744869775Z" level=info msg="shim disconnected" id=52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc namespace=k8s.io Mar 25 01:33:54.746197 containerd[1484]: time="2025-03-25T01:33:54.745800449Z" level=warning msg="cleaning up after shim disconnected" id=52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc namespace=k8s.io Mar 25 01:33:54.746197 containerd[1484]: time="2025-03-25T01:33:54.745844967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:33:54.746381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc-shm.mount: Deactivated successfully. 
Mar 25 01:33:54.749440 containerd[1484]: time="2025-03-25T01:33:54.749403631Z" level=info msg="received exit event sandbox_id:\"52a5e3ee74fef16701b698af9785908125cec6dca14020d67492dd13656c7ddc\" exit_status:137 exited_at:{seconds:1742866434 nanos:708831836}" Mar 25 01:33:54.755204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6-rootfs.mount: Deactivated successfully. Mar 25 01:33:54.759219 containerd[1484]: time="2025-03-25T01:33:54.759181308Z" level=info msg="received exit event sandbox_id:\"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" exit_status:137 exited_at:{seconds:1742866434 nanos:731408720}" Mar 25 01:33:54.759568 containerd[1484]: time="2025-03-25T01:33:54.759337820Z" level=info msg="shim disconnected" id=60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6 namespace=k8s.io Mar 25 01:33:54.759568 containerd[1484]: time="2025-03-25T01:33:54.759360379Z" level=warning msg="cleaning up after shim disconnected" id=60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6 namespace=k8s.io Mar 25 01:33:54.759568 containerd[1484]: time="2025-03-25T01:33:54.759387898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:33:54.760788 containerd[1484]: time="2025-03-25T01:33:54.760741831Z" level=info msg="TearDown network for sandbox \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" successfully" Mar 25 01:33:54.760989 containerd[1484]: time="2025-03-25T01:33:54.760970780Z" level=info msg="StopPodSandbox for \"60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6\" returns successfully" Mar 25 01:33:54.902403 kubelet[2579]: I0325 01:33:54.902300 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-lib-modules\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: 
\"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903183 kubelet[2579]: I0325 01:33:54.902783 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-xtables-lock\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903183 kubelet[2579]: I0325 01:33:54.902819 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/482fbfe6-5429-4fc6-a231-ca59762b9968-cilium-config-path\") pod \"482fbfe6-5429-4fc6-a231-ca59762b9968\" (UID: \"482fbfe6-5429-4fc6-a231-ca59762b9968\") " Mar 25 01:33:54.903183 kubelet[2579]: I0325 01:33:54.902840 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-run\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903183 kubelet[2579]: I0325 01:33:54.902876 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cni-path\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903183 kubelet[2579]: I0325 01:33:54.902895 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzrb\" (UniqueName: \"kubernetes.io/projected/482fbfe6-5429-4fc6-a231-ca59762b9968-kube-api-access-zzzrb\") pod \"482fbfe6-5429-4fc6-a231-ca59762b9968\" (UID: \"482fbfe6-5429-4fc6-a231-ca59762b9968\") " Mar 25 01:33:54.903183 kubelet[2579]: I0325 01:33:54.902913 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/754d5315-6528-4b3e-87a3-834fcc09b71f-clustermesh-secrets\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903383 kubelet[2579]: I0325 01:33:54.902927 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-hostproc\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903383 kubelet[2579]: I0325 01:33:54.902944 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-config-path\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903383 kubelet[2579]: I0325 01:33:54.902960 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-hubble-tls\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903383 kubelet[2579]: I0325 01:33:54.902977 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-kernel\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903383 kubelet[2579]: I0325 01:33:54.902991 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-etc-cni-netd\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903383 kubelet[2579]: I0325 01:33:54.903004 2579 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-cgroup\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903501 kubelet[2579]: I0325 01:33:54.903019 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-net\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903501 kubelet[2579]: I0325 01:33:54.903034 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-bpf-maps\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.903501 kubelet[2579]: I0325 01:33:54.903050 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv2hk\" (UniqueName: \"kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-kube-api-access-nv2hk\") pod \"754d5315-6528-4b3e-87a3-834fcc09b71f\" (UID: \"754d5315-6528-4b3e-87a3-834fcc09b71f\") " Mar 25 01:33:54.907824 kubelet[2579]: I0325 01:33:54.907121 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.907824 kubelet[2579]: I0325 01:33:54.907218 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cni-path" (OuterVolumeSpecName: "cni-path") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.909036 kubelet[2579]: I0325 01:33:54.908970 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.910949 kubelet[2579]: I0325 01:33:54.910646 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482fbfe6-5429-4fc6-a231-ca59762b9968-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "482fbfe6-5429-4fc6-a231-ca59762b9968" (UID: "482fbfe6-5429-4fc6-a231-ca59762b9968"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:33:54.910949 kubelet[2579]: I0325 01:33:54.910695 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.912405 kubelet[2579]: I0325 01:33:54.911141 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:33:54.912405 kubelet[2579]: I0325 01:33:54.911189 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-hostproc" (OuterVolumeSpecName: "hostproc") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.912405 kubelet[2579]: I0325 01:33:54.911209 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.912405 kubelet[2579]: I0325 01:33:54.911339 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.912405 kubelet[2579]: I0325 01:33:54.911369 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.912589 kubelet[2579]: I0325 01:33:54.911386 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.912589 kubelet[2579]: I0325 01:33:54.912395 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754d5315-6528-4b3e-87a3-834fcc09b71f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:33:54.912589 kubelet[2579]: I0325 01:33:54.912441 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:33:54.913214 kubelet[2579]: I0325 01:33:54.913188 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482fbfe6-5429-4fc6-a231-ca59762b9968-kube-api-access-zzzrb" (OuterVolumeSpecName: "kube-api-access-zzzrb") pod "482fbfe6-5429-4fc6-a231-ca59762b9968" (UID: "482fbfe6-5429-4fc6-a231-ca59762b9968"). InnerVolumeSpecName "kube-api-access-zzzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:33:54.913324 kubelet[2579]: I0325 01:33:54.913187 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-kube-api-access-nv2hk" (OuterVolumeSpecName: "kube-api-access-nv2hk") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "kube-api-access-nv2hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:33:54.913378 kubelet[2579]: I0325 01:33:54.913355 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "754d5315-6528-4b3e-87a3-834fcc09b71f" (UID: "754d5315-6528-4b3e-87a3-834fcc09b71f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:33:55.003981 kubelet[2579]: I0325 01:33:55.003934 2579 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.003981 kubelet[2579]: I0325 01:33:55.003966 2579 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.003981 kubelet[2579]: I0325 01:33:55.003975 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/482fbfe6-5429-4fc6-a231-ca59762b9968-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.003981 kubelet[2579]: I0325 01:33:55.003986 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.003996 2579 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.004004 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zzzrb\" (UniqueName: \"kubernetes.io/projected/482fbfe6-5429-4fc6-a231-ca59762b9968-kube-api-access-zzzrb\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.004012 2579 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/754d5315-6528-4b3e-87a3-834fcc09b71f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 
01:33:55.004019 2579 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.004026 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.004033 2579 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.004040 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004179 kubelet[2579]: I0325 01:33:55.004048 2579 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004332 kubelet[2579]: I0325 01:33:55.004055 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004332 kubelet[2579]: I0325 01:33:55.004062 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004332 kubelet[2579]: I0325 01:33:55.004069 2579 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/754d5315-6528-4b3e-87a3-834fcc09b71f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.004332 kubelet[2579]: I0325 01:33:55.004077 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nv2hk\" (UniqueName: \"kubernetes.io/projected/754d5315-6528-4b3e-87a3-834fcc09b71f-kube-api-access-nv2hk\") on node \"localhost\" DevicePath \"\"" Mar 25 01:33:55.668792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60968c8638effe57e89fe18730c2f32c7178be3ed56f6273b8829811093204e6-shm.mount: Deactivated successfully. Mar 25 01:33:55.668921 systemd[1]: var-lib-kubelet-pods-482fbfe6\x2d5429\x2d4fc6\x2da231\x2dca59762b9968-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzzzrb.mount: Deactivated successfully. Mar 25 01:33:55.668985 systemd[1]: var-lib-kubelet-pods-754d5315\x2d6528\x2d4b3e\x2d87a3\x2d834fcc09b71f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnv2hk.mount: Deactivated successfully. Mar 25 01:33:55.669035 systemd[1]: var-lib-kubelet-pods-754d5315\x2d6528\x2d4b3e\x2d87a3\x2d834fcc09b71f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 25 01:33:55.669101 systemd[1]: var-lib-kubelet-pods-754d5315\x2d6528\x2d4b3e\x2d87a3\x2d834fcc09b71f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 25 01:33:55.680395 kubelet[2579]: I0325 01:33:55.680344 2579 scope.go:117] "RemoveContainer" containerID="b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f" Mar 25 01:33:55.686183 containerd[1484]: time="2025-03-25T01:33:55.686120671Z" level=info msg="RemoveContainer for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\"" Mar 25 01:33:55.686461 systemd[1]: Removed slice kubepods-burstable-pod754d5315_6528_4b3e_87a3_834fcc09b71f.slice - libcontainer container kubepods-burstable-pod754d5315_6528_4b3e_87a3_834fcc09b71f.slice. 
Mar 25 01:33:55.686541 systemd[1]: kubepods-burstable-pod754d5315_6528_4b3e_87a3_834fcc09b71f.slice: Consumed 6.562s CPU time, 122.5M memory peak, 172K read from disk, 12.9M written to disk. Mar 25 01:33:55.689604 systemd[1]: Removed slice kubepods-besteffort-pod482fbfe6_5429_4fc6_a231_ca59762b9968.slice - libcontainer container kubepods-besteffort-pod482fbfe6_5429_4fc6_a231_ca59762b9968.slice. Mar 25 01:33:55.690973 kubelet[2579]: I0325 01:33:55.690555 2579 scope.go:117] "RemoveContainer" containerID="7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106" Mar 25 01:33:55.691008 containerd[1484]: time="2025-03-25T01:33:55.690260841Z" level=info msg="RemoveContainer for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" returns successfully" Mar 25 01:33:55.692296 containerd[1484]: time="2025-03-25T01:33:55.692026800Z" level=info msg="RemoveContainer for \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\"" Mar 25 01:33:55.700791 containerd[1484]: time="2025-03-25T01:33:55.700753359Z" level=info msg="RemoveContainer for \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" returns successfully" Mar 25 01:33:55.701027 kubelet[2579]: I0325 01:33:55.700978 2579 scope.go:117] "RemoveContainer" containerID="393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8" Mar 25 01:33:55.703615 containerd[1484]: time="2025-03-25T01:33:55.703589908Z" level=info msg="RemoveContainer for \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\"" Mar 25 01:33:55.708605 containerd[1484]: time="2025-03-25T01:33:55.708572639Z" level=info msg="RemoveContainer for \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" returns successfully" Mar 25 01:33:55.708753 kubelet[2579]: I0325 01:33:55.708727 2579 scope.go:117] "RemoveContainer" containerID="769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9" Mar 25 01:33:55.710863 containerd[1484]: time="2025-03-25T01:33:55.710834495Z" 
level=info msg="RemoveContainer for \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\"" Mar 25 01:33:55.713983 containerd[1484]: time="2025-03-25T01:33:55.713959672Z" level=info msg="RemoveContainer for \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" returns successfully" Mar 25 01:33:55.714352 kubelet[2579]: I0325 01:33:55.714333 2579 scope.go:117] "RemoveContainer" containerID="9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35" Mar 25 01:33:55.716229 containerd[1484]: time="2025-03-25T01:33:55.716204969Z" level=info msg="RemoveContainer for \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\"" Mar 25 01:33:55.718785 containerd[1484]: time="2025-03-25T01:33:55.718702254Z" level=info msg="RemoveContainer for \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" returns successfully" Mar 25 01:33:55.718885 kubelet[2579]: I0325 01:33:55.718848 2579 scope.go:117] "RemoveContainer" containerID="b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f" Mar 25 01:33:55.719079 containerd[1484]: time="2025-03-25T01:33:55.719046358Z" level=error msg="ContainerStatus for \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\": not found" Mar 25 01:33:55.726558 kubelet[2579]: E0325 01:33:55.726517 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\": not found" containerID="b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f" Mar 25 01:33:55.726656 kubelet[2579]: I0325 01:33:55.726565 2579 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f"} err="failed to get container status \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b20196338dff5e31d708216d81bcbd11c93dc70c1410c50f8ebe8de620ad3f1f\": not found" Mar 25 01:33:55.726656 kubelet[2579]: I0325 01:33:55.726648 2579 scope.go:117] "RemoveContainer" containerID="7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106" Mar 25 01:33:55.726934 containerd[1484]: time="2025-03-25T01:33:55.726850799Z" level=error msg="ContainerStatus for \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\": not found" Mar 25 01:33:55.727059 kubelet[2579]: E0325 01:33:55.726996 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\": not found" containerID="7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106" Mar 25 01:33:55.727110 kubelet[2579]: I0325 01:33:55.727054 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106"} err="failed to get container status \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a11f606307e133131fb85be25ad374edc5ff6709693743162a6f3d2db2f0106\": not found" Mar 25 01:33:55.727110 kubelet[2579]: I0325 01:33:55.727071 2579 scope.go:117] "RemoveContainer" containerID="393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8" Mar 25 01:33:55.727301 containerd[1484]: 
time="2025-03-25T01:33:55.727258341Z" level=error msg="ContainerStatus for \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\": not found" Mar 25 01:33:55.727390 kubelet[2579]: E0325 01:33:55.727368 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\": not found" containerID="393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8" Mar 25 01:33:55.727418 kubelet[2579]: I0325 01:33:55.727394 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8"} err="failed to get container status \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\": rpc error: code = NotFound desc = an error occurred when try to find container \"393a9334b32d7cb9f32f9b3b6efcdaa2c7b99fbe1e0f8c2a2e260931924d9aa8\": not found" Mar 25 01:33:55.727418 kubelet[2579]: I0325 01:33:55.727409 2579 scope.go:117] "RemoveContainer" containerID="769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9" Mar 25 01:33:55.727625 containerd[1484]: time="2025-03-25T01:33:55.727562407Z" level=error msg="ContainerStatus for \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\": not found" Mar 25 01:33:55.727682 kubelet[2579]: E0325 01:33:55.727667 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\": not 
found" containerID="769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9" Mar 25 01:33:55.727713 kubelet[2579]: I0325 01:33:55.727686 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9"} err="failed to get container status \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\": rpc error: code = NotFound desc = an error occurred when try to find container \"769ba3c1c8f0d6c84aa21995cc4eb62b0e623c28aa2003df07679993eb2cbde9\": not found" Mar 25 01:33:55.727713 kubelet[2579]: I0325 01:33:55.727700 2579 scope.go:117] "RemoveContainer" containerID="9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35" Mar 25 01:33:55.727906 containerd[1484]: time="2025-03-25T01:33:55.727851753Z" level=error msg="ContainerStatus for \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\": not found" Mar 25 01:33:55.728008 kubelet[2579]: E0325 01:33:55.727987 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\": not found" containerID="9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35" Mar 25 01:33:55.728046 kubelet[2579]: I0325 01:33:55.728010 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35"} err="failed to get container status \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f2f200a4e5c2b8e71116c82161bfa033d971c959e61ffd757caa834fd3a0d35\": not found" Mar 25 
01:33:55.728046 kubelet[2579]: I0325 01:33:55.728027 2579 scope.go:117] "RemoveContainer" containerID="93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8" Mar 25 01:33:55.729819 containerd[1484]: time="2025-03-25T01:33:55.729351884Z" level=info msg="RemoveContainer for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\"" Mar 25 01:33:55.731728 containerd[1484]: time="2025-03-25T01:33:55.731641419Z" level=info msg="RemoveContainer for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" returns successfully" Mar 25 01:33:55.731810 kubelet[2579]: I0325 01:33:55.731787 2579 scope.go:117] "RemoveContainer" containerID="93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8" Mar 25 01:33:55.732016 containerd[1484]: time="2025-03-25T01:33:55.731984683Z" level=error msg="ContainerStatus for \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\": not found" Mar 25 01:33:55.732148 kubelet[2579]: E0325 01:33:55.732127 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\": not found" containerID="93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8" Mar 25 01:33:55.732186 kubelet[2579]: I0325 01:33:55.732152 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8"} err="failed to get container status \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"93dd13253db5c2e2c3b7007d676830224b1a5a864a478bc9333f62f85047d3d8\": not found" Mar 25 01:33:56.483761 kubelet[2579]: 
E0325 01:33:56.483722 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:33:56.486305 kubelet[2579]: I0325 01:33:56.486259 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482fbfe6-5429-4fc6-a231-ca59762b9968" path="/var/lib/kubelet/pods/482fbfe6-5429-4fc6-a231-ca59762b9968/volumes" Mar 25 01:33:56.486796 kubelet[2579]: I0325 01:33:56.486733 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" path="/var/lib/kubelet/pods/754d5315-6528-4b3e-87a3-834fcc09b71f/volumes" Mar 25 01:33:56.603443 sshd[4183]: Connection closed by 10.0.0.1 port 58314 Mar 25 01:33:56.604090 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:56.615149 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:58314.service: Deactivated successfully. Mar 25 01:33:56.616661 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 01:33:56.617322 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Mar 25 01:33:56.619134 systemd[1]: Started sshd@23-10.0.0.141:22-10.0.0.1:58322.service - OpenSSH per-connection server daemon (10.0.0.1:58322). Mar 25 01:33:56.620241 systemd-logind[1467]: Removed session 23. Mar 25 01:33:56.670328 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 58322 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:56.671609 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:56.676515 systemd-logind[1467]: New session 24 of user core. Mar 25 01:33:56.682995 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 25 01:33:57.536193 kubelet[2579]: E0325 01:33:57.536142 2579 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:33:57.605937 sshd[4335]: Connection closed by 10.0.0.1 port 58322 Mar 25 01:33:57.606283 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:57.620799 systemd[1]: sshd@23-10.0.0.141:22-10.0.0.1:58322.service: Deactivated successfully. Mar 25 01:33:57.624626 systemd[1]: session-24.scope: Deactivated successfully. Mar 25 01:33:57.625968 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Mar 25 01:33:57.629470 systemd[1]: Started sshd@24-10.0.0.141:22-10.0.0.1:58332.service - OpenSSH per-connection server daemon (10.0.0.1:58332). Mar 25 01:33:57.631561 systemd-logind[1467]: Removed session 24. Mar 25 01:33:57.641930 kubelet[2579]: E0325 01:33:57.638427 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" containerName="mount-cgroup" Mar 25 01:33:57.641930 kubelet[2579]: E0325 01:33:57.638451 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" containerName="apply-sysctl-overwrites" Mar 25 01:33:57.641930 kubelet[2579]: E0325 01:33:57.638458 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="482fbfe6-5429-4fc6-a231-ca59762b9968" containerName="cilium-operator" Mar 25 01:33:57.641930 kubelet[2579]: E0325 01:33:57.638464 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" containerName="mount-bpf-fs" Mar 25 01:33:57.641930 kubelet[2579]: E0325 01:33:57.638470 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" containerName="clean-cilium-state" Mar 25 01:33:57.641930 kubelet[2579]: E0325 01:33:57.638475 2579 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" containerName="cilium-agent" Mar 25 01:33:57.641930 kubelet[2579]: I0325 01:33:57.638497 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="482fbfe6-5429-4fc6-a231-ca59762b9968" containerName="cilium-operator" Mar 25 01:33:57.641930 kubelet[2579]: I0325 01:33:57.638503 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="754d5315-6528-4b3e-87a3-834fcc09b71f" containerName="cilium-agent" Mar 25 01:33:57.652424 systemd[1]: Created slice kubepods-burstable-pod740d5565_f94c_4512_90a7_42d9a6509fc7.slice - libcontainer container kubepods-burstable-pod740d5565_f94c_4512_90a7_42d9a6509fc7.slice. Mar 25 01:33:57.699823 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 58332 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw Mar 25 01:33:57.701093 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:57.705538 systemd-logind[1467]: New session 25 of user core. Mar 25 01:33:57.717064 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 25 01:33:57.770210 sshd[4349]: Connection closed by 10.0.0.1 port 58332 Mar 25 01:33:57.770069 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:57.788488 systemd[1]: sshd@24-10.0.0.141:22-10.0.0.1:58332.service: Deactivated successfully. Mar 25 01:33:57.790131 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 01:33:57.790711 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Mar 25 01:33:57.792683 systemd[1]: Started sshd@25-10.0.0.141:22-10.0.0.1:58334.service - OpenSSH per-connection server daemon (10.0.0.1:58334). Mar 25 01:33:57.793535 systemd-logind[1467]: Removed session 25. 
Mar 25 01:33:57.819696 kubelet[2579]: I0325 01:33:57.819658 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-cilium-cgroup\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819696 kubelet[2579]: I0325 01:33:57.819696 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-etc-cni-netd\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819812 kubelet[2579]: I0325 01:33:57.819719 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-bpf-maps\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819812 kubelet[2579]: I0325 01:33:57.819765 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpgw\" (UniqueName: \"kubernetes.io/projected/740d5565-f94c-4512-90a7-42d9a6509fc7-kube-api-access-fbpgw\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819812 kubelet[2579]: I0325 01:33:57.819804 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-cni-path\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819966 kubelet[2579]: I0325 01:33:57.819822 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-lib-modules\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819966 kubelet[2579]: I0325 01:33:57.819838 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-xtables-lock\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819966 kubelet[2579]: I0325 01:33:57.819867 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/740d5565-f94c-4512-90a7-42d9a6509fc7-clustermesh-secrets\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819966 kubelet[2579]: I0325 01:33:57.819891 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/740d5565-f94c-4512-90a7-42d9a6509fc7-hubble-tls\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819966 kubelet[2579]: I0325 01:33:57.819909 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-hostproc\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.819966 kubelet[2579]: I0325 01:33:57.819924 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-host-proc-sys-kernel\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.820084 kubelet[2579]: I0325 01:33:57.819941 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/740d5565-f94c-4512-90a7-42d9a6509fc7-cilium-config-path\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.820084 kubelet[2579]: I0325 01:33:57.819969 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/740d5565-f94c-4512-90a7-42d9a6509fc7-cilium-ipsec-secrets\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.820084 kubelet[2579]: I0325 01:33:57.820002 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-cilium-run\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.820084 kubelet[2579]: I0325 01:33:57.820022 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/740d5565-f94c-4512-90a7-42d9a6509fc7-host-proc-sys-net\") pod \"cilium-kqj4g\" (UID: \"740d5565-f94c-4512-90a7-42d9a6509fc7\") " pod="kube-system/cilium-kqj4g"
Mar 25 01:33:57.837612 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 58334 ssh2: RSA SHA256:RyyrKoKHvyGTiWIDeMwuNNfmpVLXChNPYxUIZdc99cw
Mar 25 01:33:57.838632 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:57.842754 systemd-logind[1467]: New session 26 of user core.
Mar 25 01:33:57.852052 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 01:33:57.957354 kubelet[2579]: E0325 01:33:57.957320 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:57.959036 containerd[1484]: time="2025-03-25T01:33:57.958708680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqj4g,Uid:740d5565-f94c-4512-90a7-42d9a6509fc7,Namespace:kube-system,Attempt:0,}"
Mar 25 01:33:57.988430 containerd[1484]: time="2025-03-25T01:33:57.988117042Z" level=info msg="connecting to shim ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7" address="unix:///run/containerd/s/69107f2b983c3b69b81c86fac9e9aa0fa961fe4f7522900453fe3c146b424b4d" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:33:58.005021 systemd[1]: Started cri-containerd-ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7.scope - libcontainer container ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7.
Mar 25 01:33:58.029648 containerd[1484]: time="2025-03-25T01:33:58.029609659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqj4g,Uid:740d5565-f94c-4512-90a7-42d9a6509fc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\""
Mar 25 01:33:58.030265 kubelet[2579]: E0325 01:33:58.030243 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:58.031988 containerd[1484]: time="2025-03-25T01:33:58.031960813Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:33:58.036614 containerd[1484]: time="2025-03-25T01:33:58.036584686Z" level=info msg="Container 6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:58.042229 containerd[1484]: time="2025-03-25T01:33:58.042149884Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\""
Mar 25 01:33:58.043542 containerd[1484]: time="2025-03-25T01:33:58.043510355Z" level=info msg="StartContainer for \"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\""
Mar 25 01:33:58.044868 containerd[1484]: time="2025-03-25T01:33:58.044823987Z" level=info msg="connecting to shim 6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af" address="unix:///run/containerd/s/69107f2b983c3b69b81c86fac9e9aa0fa961fe4f7522900453fe3c146b424b4d" protocol=ttrpc version=3
Mar 25 01:33:58.069061 systemd[1]: Started cri-containerd-6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af.scope - libcontainer container 6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af.
Mar 25 01:33:58.092645 containerd[1484]: time="2025-03-25T01:33:58.092612736Z" level=info msg="StartContainer for \"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\" returns successfully"
Mar 25 01:33:58.102646 systemd[1]: cri-containerd-6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af.scope: Deactivated successfully.
Mar 25 01:33:58.105208 containerd[1484]: time="2025-03-25T01:33:58.105174841Z" level=info msg="received exit event container_id:\"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\" id:\"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\" pid:4430 exited_at:{seconds:1742866438 nanos:104885651}"
Mar 25 01:33:58.124783 containerd[1484]: time="2025-03-25T01:33:58.124722173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\" id:\"6f68fcab70e10bec9625573d2fde9c61ee1860f997bf30091c0909da117529af\" pid:4430 exited_at:{seconds:1742866438 nanos:104885651}"
Mar 25 01:33:58.688353 kubelet[2579]: E0325 01:33:58.688311 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:58.701633 containerd[1484]: time="2025-03-25T01:33:58.701583872Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:33:58.714926 containerd[1484]: time="2025-03-25T01:33:58.714616800Z" level=info msg="Container e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:58.719263 containerd[1484]: time="2025-03-25T01:33:58.719232593Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\""
Mar 25 01:33:58.719709 containerd[1484]: time="2025-03-25T01:33:58.719652817Z" level=info msg="StartContainer for \"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\""
Mar 25 01:33:58.723031 containerd[1484]: time="2025-03-25T01:33:58.722998296Z" level=info msg="connecting to shim e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474" address="unix:///run/containerd/s/69107f2b983c3b69b81c86fac9e9aa0fa961fe4f7522900453fe3c146b424b4d" protocol=ttrpc version=3
Mar 25 01:33:58.740090 systemd[1]: Started cri-containerd-e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474.scope - libcontainer container e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474.
Mar 25 01:33:58.764455 containerd[1484]: time="2025-03-25T01:33:58.764421035Z" level=info msg="StartContainer for \"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\" returns successfully"
Mar 25 01:33:58.768146 systemd[1]: cri-containerd-e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474.scope: Deactivated successfully.
Mar 25 01:33:58.770785 containerd[1484]: time="2025-03-25T01:33:58.770743846Z" level=info msg="received exit event container_id:\"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\" id:\"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\" pid:4477 exited_at:{seconds:1742866438 nanos:770498415}"
Mar 25 01:33:58.770969 containerd[1484]: time="2025-03-25T01:33:58.770922520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\" id:\"e5ed6971159daaee21b580465e125e7032ae7402b0e0593f5c904429d05ed474\" pid:4477 exited_at:{seconds:1742866438 nanos:770498415}"
Mar 25 01:33:59.692423 kubelet[2579]: E0325 01:33:59.692391 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:33:59.693903 containerd[1484]: time="2025-03-25T01:33:59.693843142Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 25 01:33:59.718484 containerd[1484]: time="2025-03-25T01:33:59.717418039Z" level=info msg="Container b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:59.724927 containerd[1484]: time="2025-03-25T01:33:59.724664239Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\""
Mar 25 01:33:59.725518 containerd[1484]: time="2025-03-25T01:33:59.725480332Z" level=info msg="StartContainer for \"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\""
Mar 25 01:33:59.729651 containerd[1484]: time="2025-03-25T01:33:59.729063773Z" level=info msg="connecting to shim b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed" address="unix:///run/containerd/s/69107f2b983c3b69b81c86fac9e9aa0fa961fe4f7522900453fe3c146b424b4d" protocol=ttrpc version=3
Mar 25 01:33:59.749008 systemd[1]: Started cri-containerd-b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed.scope - libcontainer container b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed.
Mar 25 01:33:59.777141 containerd[1484]: time="2025-03-25T01:33:59.777110538Z" level=info msg="StartContainer for \"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\" returns successfully"
Mar 25 01:33:59.777622 systemd[1]: cri-containerd-b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed.scope: Deactivated successfully.
Mar 25 01:33:59.779396 containerd[1484]: time="2025-03-25T01:33:59.779270587Z" level=info msg="received exit event container_id:\"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\" id:\"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\" pid:4521 exited_at:{seconds:1742866439 nanos:778577090}"
Mar 25 01:33:59.779396 containerd[1484]: time="2025-03-25T01:33:59.779365023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\" id:\"b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed\" pid:4521 exited_at:{seconds:1742866439 nanos:778577090}"
Mar 25 01:33:59.797809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b70c2742287d7a73148888260bb811bebc8731ad84dc352fd6c2c061432a02ed-rootfs.mount: Deactivated successfully.
Mar 25 01:34:00.697216 kubelet[2579]: E0325 01:34:00.696555 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:00.698512 containerd[1484]: time="2025-03-25T01:34:00.698477049Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 25 01:34:00.707564 containerd[1484]: time="2025-03-25T01:34:00.707522655Z" level=info msg="Container 8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:34:00.715230 containerd[1484]: time="2025-03-25T01:34:00.715186984Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\""
Mar 25 01:34:00.715641 containerd[1484]: time="2025-03-25T01:34:00.715607571Z" level=info msg="StartContainer for \"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\""
Mar 25 01:34:00.716426 containerd[1484]: time="2025-03-25T01:34:00.716380108Z" level=info msg="connecting to shim 8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3" address="unix:///run/containerd/s/69107f2b983c3b69b81c86fac9e9aa0fa961fe4f7522900453fe3c146b424b4d" protocol=ttrpc version=3
Mar 25 01:34:00.736985 systemd[1]: Started cri-containerd-8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3.scope - libcontainer container 8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3.
Mar 25 01:34:00.762835 systemd[1]: cri-containerd-8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3.scope: Deactivated successfully.
Mar 25 01:34:00.763936 containerd[1484]: time="2025-03-25T01:34:00.763904550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\" id:\"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\" pid:4560 exited_at:{seconds:1742866440 nanos:763665038}"
Mar 25 01:34:00.764002 containerd[1484]: time="2025-03-25T01:34:00.763935109Z" level=info msg="received exit event container_id:\"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\" id:\"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\" pid:4560 exited_at:{seconds:1742866440 nanos:763665038}"
Mar 25 01:34:00.769765 containerd[1484]: time="2025-03-25T01:34:00.769726214Z" level=info msg="StartContainer for \"8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3\" returns successfully"
Mar 25 01:34:00.779245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8765ca59a901768675600f2d19268e1bdf32c327f0cabc3b8564f2c2295ab8e3-rootfs.mount: Deactivated successfully.
Mar 25 01:34:01.704475 kubelet[2579]: E0325 01:34:01.704441 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:01.709886 containerd[1484]: time="2025-03-25T01:34:01.708082611Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 01:34:01.730159 containerd[1484]: time="2025-03-25T01:34:01.729358548Z" level=info msg="Container 9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:34:01.734153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561479498.mount: Deactivated successfully.
Mar 25 01:34:01.737222 containerd[1484]: time="2025-03-25T01:34:01.737167494Z" level=info msg="CreateContainer within sandbox \"ea869e59b8e454a036cd24fc0759387583013ff88dfaeae5597f70bbe79d12c7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\""
Mar 25 01:34:01.738974 containerd[1484]: time="2025-03-25T01:34:01.738933686Z" level=info msg="StartContainer for \"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\""
Mar 25 01:34:01.739865 containerd[1484]: time="2025-03-25T01:34:01.739828861Z" level=info msg="connecting to shim 9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512" address="unix:///run/containerd/s/69107f2b983c3b69b81c86fac9e9aa0fa961fe4f7522900453fe3c146b424b4d" protocol=ttrpc version=3
Mar 25 01:34:01.759984 systemd[1]: Started cri-containerd-9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512.scope - libcontainer container 9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512.
Mar 25 01:34:01.790460 containerd[1484]: time="2025-03-25T01:34:01.790416876Z" level=info msg="StartContainer for \"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\" returns successfully"
Mar 25 01:34:01.841052 containerd[1484]: time="2025-03-25T01:34:01.841009370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\" id:\"78b0f7babaf9d7b7d72673a3dc9f21ed38187da1c25f4c81c82a5bd876539b27\" pid:4627 exited_at:{seconds:1742866441 nanos:840677339}"
Mar 25 01:34:02.052928 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 25 01:34:02.709759 kubelet[2579]: E0325 01:34:02.709679 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:02.731076 kubelet[2579]: I0325 01:34:02.730592 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kqj4g" podStartSLOduration=5.730576586 podStartE2EDuration="5.730576586s" podCreationTimestamp="2025-03-25 01:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:02.729459373 +0000 UTC m=+80.318742519" watchObservedRunningTime="2025-03-25 01:34:02.730576586 +0000 UTC m=+80.319859732"
Mar 25 01:34:03.958797 kubelet[2579]: E0325 01:34:03.958754 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:04.224121 containerd[1484]: time="2025-03-25T01:34:04.224013427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\" id:\"91068642b8c5c11b5d05f4539bc7f8d61034af3da14d018bb0dd4f681643cda7\" pid:4948 exit_status:1 exited_at:{seconds:1742866444 nanos:223571035}"
Mar 25 01:34:04.856910 systemd-networkd[1406]: lxc_health: Link UP
Mar 25 01:34:04.857212 systemd-networkd[1406]: lxc_health: Gained carrier
Mar 25 01:34:05.961150 kubelet[2579]: E0325 01:34:05.961089 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:06.357343 containerd[1484]: time="2025-03-25T01:34:06.357304154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\" id:\"ee3a00861c911bf82c6fd7479bb3334c9324987d6f5ffaf69938984ed1de01ff\" pid:5169 exited_at:{seconds:1742866446 nanos:357014238}"
Mar 25 01:34:06.589988 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Mar 25 01:34:06.717388 kubelet[2579]: E0325 01:34:06.717131 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:07.484304 kubelet[2579]: E0325 01:34:07.484273 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:07.718905 kubelet[2579]: E0325 01:34:07.718649 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:34:08.455734 containerd[1484]: time="2025-03-25T01:34:08.455653537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\" id:\"7919e3cfaa91ef5449ea5e39de81e8d0ccf1bc195bc16b3eec6b9ea4d65b4369\" pid:5201 exited_at:{seconds:1742866448 nanos:455166382}"
Mar 25 01:34:10.564597 containerd[1484]: time="2025-03-25T01:34:10.564511722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9584dc66349fdabee6932ce0246207dcb7e15dbf30faa143dbf347ccd9659512\" id:\"61f5558c4f2012bf44549201d8df985f8a2f2d12b93507c4749219163e7ca5ea\" pid:5226 exited_at:{seconds:1742866450 nanos:564216924}"
Mar 25 01:34:10.568765 sshd[4358]: Connection closed by 10.0.0.1 port 58334
Mar 25 01:34:10.569262 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
Mar 25 01:34:10.572407 systemd[1]: sshd@25-10.0.0.141:22-10.0.0.1:58334.service: Deactivated successfully.
Mar 25 01:34:10.574325 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 01:34:10.574956 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit.
Mar 25 01:34:10.575819 systemd-logind[1467]: Removed session 26.
Mar 25 01:34:12.487839 kubelet[2579]: E0325 01:34:12.487805 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"