Jul 15 23:25:58.834661 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 23:25:58.834683 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:25:58.834693 kernel: KASLR enabled
Jul 15 23:25:58.834698 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:25:58.834704 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 15 23:25:58.834709 kernel: random: crng init done
Jul 15 23:25:58.834715 kernel: secureboot: Secure boot disabled
Jul 15 23:25:58.834721 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:25:58.834726 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 15 23:25:58.834734 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 23:25:58.834739 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834745 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834751 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834757 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834764 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834771 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834777 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834783 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834789 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:25:58.834795 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 23:25:58.834801 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:25:58.834807 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:25:58.834813 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 15 23:25:58.834819 kernel: Zone ranges:
Jul 15 23:25:58.834825 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:25:58.834832 kernel:   DMA32    empty
Jul 15 23:25:58.834838 kernel:   Normal   empty
Jul 15 23:25:58.834844 kernel:   Device   empty
Jul 15 23:25:58.834850 kernel: Movable zone start for each node
Jul 15 23:25:58.834855 kernel: Early memory node ranges
Jul 15 23:25:58.834862 kernel:   node   0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 15 23:25:58.834868 kernel:   node   0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 15 23:25:58.834874 kernel:   node   0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 15 23:25:58.834880 kernel:   node   0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 15 23:25:58.834886 kernel:   node   0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 15 23:25:58.834892 kernel:   node   0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 15 23:25:58.834898 kernel:   node   0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 15 23:25:58.834905 kernel:   node   0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 15 23:25:58.834911 kernel:   node   0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 15 23:25:58.834918 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 15 23:25:58.834926 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 15 23:25:58.834933 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 15 23:25:58.834939 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 23:25:58.834947 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:25:58.834954 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 23:25:58.834961 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 15 23:25:58.834967 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:25:58.834973 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 23:25:58.834980 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:25:58.834986 kernel: psci: Trusted OS migration not required
Jul 15 23:25:58.834992 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:25:58.834999 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 23:25:58.835006 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:25:58.835014 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:25:58.835021 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 23:25:58.835027 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:25:58.835034 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:25:58.835040 kernel: CPU features: detected: Spectre-v4
Jul 15 23:25:58.835047 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:25:58.835078 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 23:25:58.835085 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 23:25:58.835092 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 23:25:58.835099 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 23:25:58.835105 kernel: alternatives: applying boot alternatives
Jul 15 23:25:58.835113 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:25:58.835122 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:25:58.835129 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:25:58.835135 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:25:58.835142 kernel: Fallback order for Node 0: 0
Jul 15 23:25:58.835148 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 23:25:58.835154 kernel: Policy zone: DMA
Jul 15 23:25:58.835161 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:25:58.835167 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 23:25:58.835174 kernel: software IO TLB: area num 4.
Jul 15 23:25:58.835181 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 23:25:58.835195 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 15 23:25:58.835204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:25:58.835211 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:25:58.835218 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:25:58.835225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:25:58.835232 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:25:58.835238 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:25:58.835245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:25:58.835252 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:25:58.835259 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:25:58.835265 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:25:58.835272 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:25:58.835280 kernel: GICv3: 256 SPIs implemented
Jul 15 23:25:58.835287 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:25:58.835293 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:25:58.835300 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 15 23:25:58.835306 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 23:25:58.835313 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 23:25:58.835320 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 23:25:58.835326 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 23:25:58.835334 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 15 23:25:58.835341 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 15 23:25:58.835348 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 15 23:25:58.835355 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:25:58.835363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:25:58.835370 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 23:25:58.835377 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 23:25:58.835384 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 23:25:58.835391 kernel: arm-pv: using stolen time PV
Jul 15 23:25:58.835397 kernel: Console: colour dummy device 80x25
Jul 15 23:25:58.835404 kernel: ACPI: Core revision 20240827
Jul 15 23:25:58.835411 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 23:25:58.835418 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:25:58.835425 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:25:58.835434 kernel: landlock: Up and running.
Jul 15 23:25:58.835440 kernel: SELinux: Initializing.
Jul 15 23:25:58.835447 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:25:58.835454 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:25:58.835460 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:25:58.835467 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:25:58.835473 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:25:58.835481 kernel: Remapping and enabling EFI services.
Jul 15 23:25:58.835487 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:25:58.835500 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:25:58.835507 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 23:25:58.835514 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 15 23:25:58.835522 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:25:58.835529 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 23:25:58.835536 kernel: Detected PIPT I-cache on CPU2
Jul 15 23:25:58.835543 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 23:25:58.835550 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 15 23:25:58.835559 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:25:58.835565 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 23:25:58.835572 kernel: Detected PIPT I-cache on CPU3
Jul 15 23:25:58.835579 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 23:25:58.835586 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 15 23:25:58.835593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:25:58.835600 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 23:25:58.835607 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:25:58.835614 kernel: SMP: Total of 4 processors activated.
Jul 15 23:25:58.835622 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:25:58.835629 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:25:58.835636 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 23:25:58.835643 kernel: CPU features: detected: Common not Private translations
Jul 15 23:25:58.835650 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:25:58.835658 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 15 23:25:58.835665 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 23:25:58.835672 kernel: CPU features: detected: LSE atomic instructions
Jul 15 23:25:58.835680 kernel: CPU features: detected: Privileged Access Never
Jul 15 23:25:58.835688 kernel: CPU features: detected: RAS Extension Support
Jul 15 23:25:58.835695 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 23:25:58.835712 kernel: alternatives: applying system-wide alternatives
Jul 15 23:25:58.835719 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 15 23:25:58.835726 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Jul 15 23:25:58.835734 kernel: devtmpfs: initialized
Jul 15 23:25:58.835741 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:25:58.835749 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:25:58.835756 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 23:25:58.835764 kernel: 0 pages in range for non-PLT usage
Jul 15 23:25:58.835771 kernel: 508432 pages in range for PLT usage
Jul 15 23:25:58.835778 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:25:58.835785 kernel: SMBIOS 3.0.0 present.
Jul 15 23:25:58.835792 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 23:25:58.835799 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:25:58.835805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:25:58.835812 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:25:58.835819 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:25:58.835843 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:25:58.835850 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:25:58.835857 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Jul 15 23:25:58.835864 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:25:58.835871 kernel: cpuidle: using governor menu
Jul 15 23:25:58.835878 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:25:58.835885 kernel: ASID allocator initialised with 32768 entries
Jul 15 23:25:58.835893 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:25:58.835900 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:25:58.835908 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:25:58.835915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:25:58.835922 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:25:58.835929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:25:58.835936 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:25:58.835943 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:25:58.835950 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:25:58.835957 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:25:58.835964 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:25:58.835972 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:25:58.835979 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:25:58.835986 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:25:58.835993 kernel: ACPI: Interpreter enabled
Jul 15 23:25:58.835999 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:25:58.836006 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:25:58.836013 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:25:58.836020 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:25:58.836027 kernel: ACPI: CPU2 has been hot-added
Jul 15 23:25:58.836034 kernel: ACPI: CPU3 has been hot-added
Jul 15 23:25:58.836042 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 23:25:58.836065 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 23:25:58.836073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:25:58.836217 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:25:58.836285 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:25:58.836343 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:25:58.836401 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 23:25:58.836459 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 23:25:58.836471 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 23:25:58.836478 kernel: PCI host bridge to bus 0000:00
Jul 15 23:25:58.836544 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 23:25:58.836598 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:25:58.836651 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 23:25:58.836702 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:25:58.836784 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:25:58.836871 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:25:58.836931 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 23:25:58.836990 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 23:25:58.837047 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 23:25:58.837121 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 23:25:58.837180 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 23:25:58.837254 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 23:25:58.837309 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 23:25:58.837360 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:25:58.837411 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 23:25:58.837420 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:25:58.837427 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:25:58.837434 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:25:58.837443 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:25:58.837450 kernel: iommu: Default domain type: Translated
Jul 15 23:25:58.837456 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:25:58.837463 kernel: efivars: Registered efivars operations
Jul 15 23:25:58.837470 kernel: vgaarb: loaded
Jul 15 23:25:58.837477 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:25:58.837484 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:25:58.837491 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:25:58.837498 kernel: pnp: PnP ACPI init
Jul 15 23:25:58.837570 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 23:25:58.837579 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:25:58.837586 kernel: NET: Registered PF_INET protocol family
Jul 15 23:25:58.837593 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:25:58.837600 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:25:58.837607 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:25:58.837614 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:25:58.837621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:25:58.837630 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:25:58.837637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:25:58.837644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:25:58.837651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:25:58.837658 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:25:58.837665 kernel: kvm [1]: HYP mode not available
Jul 15 23:25:58.837672 kernel: Initialise system trusted keyrings
Jul 15 23:25:58.837679 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:25:58.837686 kernel: Key type asymmetric registered
Jul 15 23:25:58.837694 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:25:58.837701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:25:58.837708 kernel: io scheduler mq-deadline registered
Jul 15 23:25:58.837715 kernel: io scheduler kyber registered
Jul 15 23:25:58.837722 kernel: io scheduler bfq registered
Jul 15 23:25:58.837729 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:25:58.837736 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:25:58.837743 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:25:58.837800 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 23:25:58.837810 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:25:58.837817 kernel: thunder_xcv, ver 1.0
Jul 15 23:25:58.837824 kernel: thunder_bgx, ver 1.0
Jul 15 23:25:58.837831 kernel: nicpf, ver 1.0
Jul 15 23:25:58.837838 kernel: nicvf, ver 1.0
Jul 15 23:25:58.837908 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:25:58.837965 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:25:58 UTC (1752621958)
Jul 15 23:25:58.837975 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:25:58.837982 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 23:25:58.837990 kernel: watchdog: NMI not fully supported
Jul 15 23:25:58.837997 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:25:58.838005 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:25:58.838011 kernel: Segment Routing with IPv6
Jul 15 23:25:58.838018 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:25:58.838025 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:25:58.838032 kernel: Key type dns_resolver registered
Jul 15 23:25:58.838038 kernel: registered taskstats version 1
Jul 15 23:25:58.838045 kernel: Loading compiled-in X.509 certificates
Jul 15 23:25:58.838064 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:25:58.838071 kernel: Demotion targets for Node 0: null
Jul 15 23:25:58.838078 kernel: Key type .fscrypt registered
Jul 15 23:25:58.838084 kernel: Key type fscrypt-provisioning registered
Jul 15 23:25:58.838103 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:25:58.838110 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:25:58.838117 kernel: ima: No architecture policies found
Jul 15 23:25:58.838124 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:25:58.838133 kernel: clk: Disabling unused clocks
Jul 15 23:25:58.838140 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:25:58.838147 kernel: Warning: unable to open an initial console.
Jul 15 23:25:58.838154 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:25:58.838161 kernel: Run /init as init process
Jul 15 23:25:58.838168 kernel:   with arguments:
Jul 15 23:25:58.838175 kernel:     /init
Jul 15 23:25:58.838187 kernel:   with environment:
Jul 15 23:25:58.838195 kernel:     HOME=/
Jul 15 23:25:58.838202 kernel:     TERM=linux
Jul 15 23:25:58.838210 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:25:58.838218 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:25:58.838228 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:25:58.838236 systemd[1]: Detected virtualization kvm.
Jul 15 23:25:58.838243 systemd[1]: Detected architecture arm64.
Jul 15 23:25:58.838250 systemd[1]: Running in initrd.
Jul 15 23:25:58.838257 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:25:58.838267 systemd[1]: Hostname set to .
Jul 15 23:25:58.838274 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:25:58.838281 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:25:58.838289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:25:58.838296 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:25:58.838304 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:25:58.838311 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:25:58.838319 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:25:58.838329 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:25:58.838337 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:25:58.838345 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:25:58.838352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:25:58.838359 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:25:58.838367 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:25:58.838374 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:25:58.838383 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:25:58.838391 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:25:58.838398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:25:58.838406 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:25:58.838413 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:25:58.838420 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:25:58.838428 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:25:58.838436 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:25:58.838444 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:25:58.838452 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:25:58.838459 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:25:58.838467 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:25:58.838474 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:25:58.838482 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:25:58.838490 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:25:58.838497 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:25:58.838504 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:25:58.838513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:25:58.838521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:25:58.838529 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:25:58.838536 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:25:58.838545 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:25:58.838569 systemd-journald[244]: Collecting audit messages is disabled.
Jul 15 23:25:58.838587 systemd-journald[244]: Journal started
Jul 15 23:25:58.838606 systemd-journald[244]: Runtime Journal (/run/log/journal/a340faba2fa7472c85832f82e52c4bde) is 6M, max 48.5M, 42.4M free.
Jul 15 23:25:58.846411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:25:58.846442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:25:58.829165 systemd-modules-load[245]: Inserted module 'overlay'
Jul 15 23:25:58.850059 kernel: Bridge firewalling registered
Jul 15 23:25:58.850034 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 15 23:25:58.852816 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:25:58.853258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:25:58.854590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:25:58.859093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:25:58.860871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:25:58.863120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:25:58.874646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:25:58.881394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:25:58.884697 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:25:58.887741 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:25:58.890564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:25:58.893761 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:25:58.897300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:25:58.906634 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:25:58.921124 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:25:58.936824 systemd-resolved[287]: Positive Trust Anchors:
Jul 15 23:25:58.936843 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:25:58.936876 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:25:58.941610 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 15 23:25:58.942620 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:25:58.946844 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:25:58.997121 kernel: SCSI subsystem initialized
Jul 15 23:25:59.002073 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:25:59.009077 kernel: iscsi: registered transport (tcp)
Jul 15 23:25:59.022086 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:25:59.022131 kernel: QLogic iSCSI HBA Driver
Jul 15 23:25:59.038126 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:25:59.054108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:25:59.056269 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:25:59.099341 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:25:59.101614 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:25:59.160089 kernel: raid6: neonx8 gen() 15687 MB/s
Jul 15 23:25:59.177083 kernel: raid6: neonx4 gen() 15789 MB/s
Jul 15 23:25:59.194079 kernel: raid6: neonx2 gen() 13127 MB/s
Jul 15 23:25:59.211079 kernel: raid6: neonx1 gen() 10401 MB/s
Jul 15 23:25:59.228105 kernel: raid6: int64x8 gen() 6895 MB/s
Jul 15 23:25:59.245090 kernel: raid6: int64x4 gen() 7344 MB/s
Jul 15 23:25:59.262091 kernel: raid6: int64x2 gen() 6096 MB/s
Jul 15 23:25:59.279188 kernel: raid6: int64x1 gen() 5052 MB/s
Jul 15 23:25:59.279226 kernel: raid6: using algorithm neonx4 gen() 15789 MB/s
Jul 15 23:25:59.297147 kernel: raid6: .... xor() 12392 MB/s, rmw enabled
Jul 15 23:25:59.297171 kernel: raid6: using neon recovery algorithm
Jul 15 23:25:59.302074 kernel: xor: measuring software checksum speed
Jul 15 23:25:59.305195 kernel: 8regs : 1683 MB/sec
Jul 15 23:25:59.305218 kernel: 32regs : 21658 MB/sec
Jul 15 23:25:59.306412 kernel: arm64_neon : 27917 MB/sec
Jul 15 23:25:59.306423 kernel: xor: using function: arm64_neon (27917 MB/sec)
Jul 15 23:25:59.360090 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:25:59.366305 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:25:59.368731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:25:59.402221 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Jul 15 23:25:59.406282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:25:59.408677 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:25:59.434416 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Jul 15 23:25:59.455289 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:25:59.457486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:25:59.512496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:25:59.515725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:25:59.565125 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 15 23:25:59.565352 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 23:25:59.568249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:25:59.568367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:25:59.575231 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:25:59.585260 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:25:59.585283 kernel: GPT:9289727 != 19775487
Jul 15 23:25:59.585293 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:25:59.585302 kernel: GPT:9289727 != 19775487
Jul 15 23:25:59.585310 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:25:59.585318 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:25:59.577236 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:25:59.603805 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 23:25:59.609840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:25:59.618087 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:25:59.626715 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 23:25:59.634477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:25:59.640696 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 23:25:59.642065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 23:25:59.645038 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:25:59.647228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:25:59.649276 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:25:59.651977 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:25:59.653790 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:25:59.667927 disk-uuid[590]: Primary Header is updated.
Jul 15 23:25:59.667927 disk-uuid[590]: Secondary Entries is updated.
Jul 15 23:25:59.667927 disk-uuid[590]: Secondary Header is updated.
Jul 15 23:25:59.672298 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:25:59.677186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:26:00.682735 disk-uuid[593]: The operation has completed successfully.
Jul 15 23:26:00.684085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:26:00.701719 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:26:00.701816 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:26:00.732704 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:26:00.762018 sh[609]: Success
Jul 15 23:26:00.777739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:26:00.777796 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:26:00.780087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:26:00.787553 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 23:26:00.810287 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:26:00.822637 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:26:00.824984 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:26:00.835110 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:26:00.835145 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (621)
Jul 15 23:26:00.836681 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b
Jul 15 23:26:00.836709 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:26:00.838282 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:26:00.841725 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:26:00.843107 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:26:00.844507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:26:00.845374 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:26:00.846886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:26:00.871083 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (650)
Jul 15 23:26:00.873738 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:26:00.873779 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:26:00.873790 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:26:00.880067 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:26:00.880400 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:26:00.882631 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:26:00.956581 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:26:00.960697 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:26:01.001807 systemd-networkd[796]: lo: Link UP
Jul 15 23:26:01.001818 systemd-networkd[796]: lo: Gained carrier
Jul 15 23:26:01.002629 systemd-networkd[796]: Enumeration completed
Jul 15 23:26:01.002731 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:26:01.003120 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:26:01.003123 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:26:01.003704 systemd-networkd[796]: eth0: Link UP
Jul 15 23:26:01.003707 systemd-networkd[796]: eth0: Gained carrier
Jul 15 23:26:01.003715 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:26:01.005158 systemd[1]: Reached target network.target - Network.
Jul 15 23:26:01.029091 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 23:26:01.034212 ignition[696]: Ignition 2.21.0
Jul 15 23:26:01.034227 ignition[696]: Stage: fetch-offline
Jul 15 23:26:01.034266 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:26:01.034274 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:26:01.034458 ignition[696]: parsed url from cmdline: ""
Jul 15 23:26:01.034461 ignition[696]: no config URL provided
Jul 15 23:26:01.034465 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:26:01.034476 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:26:01.034494 ignition[696]: op(1): [started] loading QEMU firmware config module
Jul 15 23:26:01.034498 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 23:26:01.043449 ignition[696]: op(1): [finished] loading QEMU firmware config module
Jul 15 23:26:01.080614 ignition[696]: parsing config with SHA512: 7b5a075105f51dfea4ca5fc3ee4a9148d6f826bd988ded4839ad2ca85fbf26baba948c8ca9476b45f63d2e17aefca58a83751ed80635aba13a3ca1e15932baa8
Jul 15 23:26:01.084897 unknown[696]: fetched base config from "system"
Jul 15 23:26:01.084909 unknown[696]: fetched user config from "qemu"
Jul 15 23:26:01.085319 ignition[696]: fetch-offline: fetch-offline passed
Jul 15 23:26:01.085385 ignition[696]: Ignition finished successfully
Jul 15 23:26:01.087945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:26:01.089350 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 23:26:01.090122 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:26:01.119831 ignition[810]: Ignition 2.21.0
Jul 15 23:26:01.119850 ignition[810]: Stage: kargs
Jul 15 23:26:01.119998 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:26:01.120007 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:26:01.122581 ignition[810]: kargs: kargs passed
Jul 15 23:26:01.125722 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:26:01.122672 ignition[810]: Ignition finished successfully
Jul 15 23:26:01.127747 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:26:01.149424 ignition[818]: Ignition 2.21.0
Jul 15 23:26:01.149439 ignition[818]: Stage: disks
Jul 15 23:26:01.149575 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:26:01.149585 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:26:01.151269 ignition[818]: disks: disks passed
Jul 15 23:26:01.152954 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:26:01.151332 ignition[818]: Ignition finished successfully
Jul 15 23:26:01.154321 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:26:01.155633 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:26:01.157496 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:26:01.159002 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:26:01.160884 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:26:01.163643 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:26:01.190038 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:26:01.194246 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:26:01.197001 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:26:01.259071 kernel: EXT4-fs (vda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none.
Jul 15 23:26:01.259860 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:26:01.261115 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:26:01.265263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:26:01.266842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:26:01.267827 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:26:01.267866 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:26:01.267888 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:26:01.280434 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:26:01.282798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:26:01.288665 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836)
Jul 15 23:26:01.288688 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:26:01.288698 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:26:01.288713 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:26:01.291366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:26:01.335279 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:26:01.339648 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:26:01.343482 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:26:01.346288 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:26:01.413087 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:26:01.414991 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:26:01.416512 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:26:01.435083 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:26:01.444832 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:26:01.452985 ignition[950]: INFO : Ignition 2.21.0
Jul 15 23:26:01.452985 ignition[950]: INFO : Stage: mount
Jul 15 23:26:01.454483 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:26:01.454483 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:26:01.457344 ignition[950]: INFO : mount: mount passed
Jul 15 23:26:01.457344 ignition[950]: INFO : Ignition finished successfully
Jul 15 23:26:01.456818 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:26:01.460269 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:26:01.833801 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:26:01.835330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:26:01.853873 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (962)
Jul 15 23:26:01.853906 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:26:01.853916 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:26:01.854802 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:26:01.858125 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:26:01.882054 ignition[979]: INFO : Ignition 2.21.0
Jul 15 23:26:01.882054 ignition[979]: INFO : Stage: files
Jul 15 23:26:01.884413 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:26:01.884413 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:26:01.886585 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:26:01.886585 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:26:01.886585 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:26:01.890481 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:26:01.890481 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:26:01.890481 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:26:01.890481 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 23:26:01.890481 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 15 23:26:01.887523 unknown[979]: wrote ssh authorized keys file for user: core
Jul 15 23:26:01.941538 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:26:02.311027 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 23:26:02.311027 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 23:26:02.314586 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 15 23:26:02.318196 systemd-networkd[796]: eth0: Gained IPv6LL
Jul 15 23:26:02.503521 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 23:26:02.619551 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 23:26:02.619551 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:26:02.623184 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:26:02.634880 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:26:02.634880 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:26:02.634880 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:26:02.634880 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:26:02.634880 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:26:02.634880 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 15 23:26:03.298864 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 23:26:03.750421 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:26:03.750421 ignition[979]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 23:26:03.754297 ignition[979]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 23:26:03.768157 ignition[979]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 23:26:03.770118 ignition[979]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 23:26:03.773010 ignition[979]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 23:26:03.773010 ignition[979]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:26:03.773010 ignition[979]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:26:03.773010 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:26:03.773010 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:26:03.773010 ignition[979]: INFO : files: files passed
Jul 15 23:26:03.773010 ignition[979]: INFO : Ignition finished successfully
Jul 15 23:26:03.773722 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:26:03.776538 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:26:03.778678 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:26:03.803503 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 23:26:03.802468 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:26:03.802566 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:26:03.807413 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:26:03.807413 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:26:03.810766 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:26:03.810795 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:26:03.812292 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:26:03.815894 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:26:03.844763 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:26:03.844875 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:26:03.847286 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:26:03.849105 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:26:03.851081 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:26:03.851899 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:26:03.866259 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:26:03.868934 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:26:03.896213 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:26:03.897499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:26:03.899522 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:26:03.901290 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:26:03.901421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:26:03.903999 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:26:03.906208 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:26:03.907906 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:26:03.909686 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:26:03.911644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:26:03.913601 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:26:03.915520 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:26:03.917357 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:26:03.919456 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:26:03.921638 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:26:03.923377 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:26:03.924923 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:26:03.925075 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:26:03.927418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:26:03.929376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:26:03.931308 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:26:03.935117 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:26:03.936312 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:26:03.936445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:26:03.939280 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:26:03.939404 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:26:03.941231 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:26:03.942698 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:26:03.942815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:26:03.944779 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:26:03.946254 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:26:03.947881 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:26:03.947968 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:26:03.949957 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:26:03.950037 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:26:03.951521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:26:03.951648 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:26:03.953200 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:26:03.953298 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:26:03.955605 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:26:03.957877 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:26:03.958831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:26:03.958954 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:26:03.961021 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:26:03.961144 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:26:03.967680 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:26:03.967761 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:26:03.974255 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:26:03.980719 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:26:03.980826 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:26:03.983983 ignition[1034]: INFO : Ignition 2.21.0
Jul 15 23:26:03.983983 ignition[1034]: INFO : Stage: umount
Jul 15 23:26:03.983983 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:26:03.983983 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:26:03.983983 ignition[1034]: INFO : umount: umount passed
Jul 15 23:26:03.983983 ignition[1034]: INFO : Ignition finished successfully
Jul 15 23:26:03.984851 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:26:03.984940 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:26:03.987088 systemd[1]: Stopped target network.target - Network.
Jul 15 23:26:03.988296 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:26:03.988368 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:26:03.990240 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:26:03.990292 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:26:03.991914 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:26:03.991969 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:26:03.993661 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:26:03.993708 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:26:03.995372 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:26:03.995426 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:26:03.997335 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:26:03.998981 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:26:04.002225 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:26:04.002329 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:26:04.005388 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:26:04.005993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:26:04.006085 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:26:04.009550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:26:04.009731 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:26:04.011592 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:26:04.014385 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:26:04.014535 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:26:04.015869 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:26:04.015907 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:26:04.019251 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:26:04.020669 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:26:04.020733 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:26:04.023121 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:26:04.023184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:26:04.026116 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:26:04.026164 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:26:04.028483 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:26:04.031898 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:26:04.048800 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:26:04.052192 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:26:04.053694 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:26:04.053739 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:26:04.055972 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:26:04.056005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:26:04.057807 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:26:04.057857 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:26:04.060908 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:26:04.060960 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:26:04.063591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:26:04.063645 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:26:04.067461 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:26:04.068547 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:26:04.068605 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:26:04.071664 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:26:04.071712 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:26:04.074883 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 23:26:04.074926 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:26:04.078291 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:26:04.078332 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:26:04.080643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:26:04.080690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:26:04.084465 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:26:04.089233 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:26:04.094266 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:26:04.094368 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:26:04.097343 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:26:04.099075 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:26:04.118559 systemd[1]: Switching root.
Jul 15 23:26:04.158433 systemd-journald[244]: Journal stopped
Jul 15 23:26:04.934541 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:26:04.934586 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:26:04.934602 kernel: SELinux: policy capability open_perms=1
Jul 15 23:26:04.934611 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:26:04.934621 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:26:04.934632 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:26:04.934646 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:26:04.934656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:26:04.934665 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:26:04.934675 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:26:04.934689 kernel: audit: type=1403 audit(1752621964.338:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:26:04.934699 systemd[1]: Successfully loaded SELinux policy in 51.165ms.
Jul 15 23:26:04.934715 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.374ms.
Jul 15 23:26:04.934726 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:26:04.934737 systemd[1]: Detected virtualization kvm.
Jul 15 23:26:04.934746 systemd[1]: Detected architecture arm64.
Jul 15 23:26:04.934757 systemd[1]: Detected first boot.
Jul 15 23:26:04.934767 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:26:04.934780 zram_generator::config[1079]: No configuration found.
Jul 15 23:26:04.934794 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:26:04.934804 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:26:04.934814 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:26:04.934824 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:26:04.934835 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:26:04.934845 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:26:04.934855 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:26:04.934866 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:26:04.934877 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:26:04.934887 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:26:04.934897 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:26:04.934907 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:26:04.934917 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:26:04.934927 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:26:04.934937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:26:04.934948 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:26:04.934958 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:26:04.934970 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:26:04.934980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:26:04.934991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:26:04.935001 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 15 23:26:04.935011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:26:04.935021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:26:04.935031 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:26:04.935043 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:26:04.935077 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:26:04.935089 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:26:04.935100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:26:04.935110 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:26:04.935120 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:26:04.935130 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:26:04.935140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:26:04.935150 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:26:04.935167 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:26:04.935181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:26:04.935191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:26:04.935201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:26:04.935212 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:26:04.935222 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:26:04.935232 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:26:04.935242 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:26:04.935253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:26:04.935264 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:26:04.935274 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:26:04.935285 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:26:04.935295 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:26:04.935305 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:26:04.935316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:26:04.935326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:26:04.935336 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:26:04.935346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:26:04.935358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:26:04.935369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:26:04.935379 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:26:04.935389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:26:04.935400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:26:04.935410 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:26:04.935423 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:26:04.935433 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:26:04.935445 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:26:04.935456 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:26:04.935465 kernel: fuse: init (API version 7.41)
Jul 15 23:26:04.935475 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:26:04.935484 kernel: loop: module loaded
Jul 15 23:26:04.935494 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:26:04.935504 kernel: ACPI: bus type drm_connector registered
Jul 15 23:26:04.935514 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:26:04.935524 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:26:04.935535 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:26:04.935545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:26:04.935555 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:26:04.935566 systemd[1]: Stopped verity-setup.service.
Jul 15 23:26:04.935576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:26:04.935608 systemd-journald[1146]: Collecting audit messages is disabled.
Jul 15 23:26:04.935630 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:26:04.935641 systemd-journald[1146]: Journal started
Jul 15 23:26:04.935661 systemd-journald[1146]: Runtime Journal (/run/log/journal/a340faba2fa7472c85832f82e52c4bde) is 6M, max 48.5M, 42.4M free.
Jul 15 23:26:04.715481 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:26:04.733011 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 15 23:26:04.733418 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:26:04.938614 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:26:04.939261 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:26:04.940349 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:26:04.941508 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:26:04.942703 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:26:04.945124 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:26:04.946535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:26:04.947995 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:26:04.948195 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:26:04.949561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:26:04.949721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:26:04.951249 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:26:04.951417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:26:04.952795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:26:04.952954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:26:04.954408 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:26:04.954557 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:26:04.955902 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:26:04.956074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:26:04.957425 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:26:04.958868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:26:04.960526 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:26:04.962007 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:26:04.974012 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:26:04.976655 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:26:04.978627 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:26:04.979811 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:26:04.979851 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:26:04.981828 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:26:04.984036 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:26:04.985375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:26:04.986490 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:26:04.988404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:26:04.989621 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:26:04.991851 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:26:04.993003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:26:04.996204 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:26:04.999592 systemd-journald[1146]: Time spent on flushing to /var/log/journal/a340faba2fa7472c85832f82e52c4bde is 13.187ms for 886 entries.
Jul 15 23:26:04.999592 systemd-journald[1146]: System Journal (/var/log/journal/a340faba2fa7472c85832f82e52c4bde) is 8M, max 195.6M, 187.6M free.
Jul 15 23:26:05.024629 systemd-journald[1146]: Received client request to flush runtime journal.
Jul 15 23:26:05.024687 kernel: loop0: detected capacity change from 0 to 138376
Jul 15 23:26:05.000800 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:26:05.003254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:26:05.007103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:26:05.008737 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:26:05.010616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:26:05.012481 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:26:05.019487 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:26:05.023198 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:26:05.027389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:26:05.040221 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:26:05.043036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:26:05.047604 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jul 15 23:26:05.047622 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jul 15 23:26:05.053134 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:26:05.055734 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:26:05.060319 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:26:05.067139 kernel: loop1: detected capacity change from 0 to 203944
Jul 15 23:26:05.085207 kernel: loop2: detected capacity change from 0 to 107312
Jul 15 23:26:05.092600 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:26:05.096072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:26:05.113086 kernel: loop3: detected capacity change from 0 to 138376
Jul 15 23:26:05.120085 kernel: loop4: detected capacity change from 0 to 203944
Jul 15 23:26:05.123395 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Jul 15 23:26:05.123411 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Jul 15 23:26:05.126079 kernel: loop5: detected capacity change from 0 to 107312
Jul 15 23:26:05.130776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:26:05.131394 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 15 23:26:05.132156 (sd-merge)[1222]: Merged extensions into '/usr'.
Jul 15 23:26:05.135925 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:26:05.135939 systemd[1]: Reloading...
Jul 15 23:26:05.205077 zram_generator::config[1253]: No configuration found.
Jul 15 23:26:05.247033 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:26:05.283278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:26:05.354030 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:26:05.354405 systemd[1]: Reloading finished in 218 ms.
Jul 15 23:26:05.393095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:26:05.394486 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:26:05.413380 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:26:05.415173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:26:05.426361 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:26:05.426376 systemd[1]: Reloading...
Jul 15 23:26:05.434769 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:26:05.434799 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:26:05.434999 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:26:05.435564 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:26:05.436346 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:26:05.436656 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jul 15 23:26:05.436760 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jul 15 23:26:05.439652 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:26:05.439747 systemd-tmpfiles[1286]: Skipping /boot
Jul 15 23:26:05.448620 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:26:05.448719 systemd-tmpfiles[1286]: Skipping /boot
Jul 15 23:26:05.474117 zram_generator::config[1313]: No configuration found.
Jul 15 23:26:05.540537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:26:05.610221 systemd[1]: Reloading finished in 183 ms.
Jul 15 23:26:05.625799 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:26:05.631603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:26:05.641369 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:26:05.643808 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:26:05.653863 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:26:05.658360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:26:05.661183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:26:05.664497 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:26:05.675952 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:26:05.680221 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:26:05.682806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:26:05.684547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:26:05.688439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:26:05.690563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:26:05.691705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:26:05.691822 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:26:05.695924 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:26:05.699378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:26:05.702498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:26:05.702669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:26:05.706622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:26:05.706769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:26:05.708546 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:26:05.708685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:26:05.712852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:26:05.714419 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Jul 15 23:26:05.716326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:26:05.724450 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:26:05.729141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:26:05.730273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:26:05.730394 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:26:05.732277 augenrules[1387]: No rules
Jul 15 23:26:05.732704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:26:05.736801 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:26:05.737001 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:26:05.738640 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:26:05.740628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:26:05.740786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:26:05.742345 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:26:05.743929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:26:05.744085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:26:05.745669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:26:05.745792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:26:05.753455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:26:05.757205 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:26:05.758246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:26:05.760435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:26:05.767867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:26:05.771268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:26:05.773461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:26:05.774940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:26:05.775072 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:26:05.777352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:26:05.778400 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:26:05.781512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:26:05.783082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:26:05.789824 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:26:05.790093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:26:05.794304 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:26:05.795508 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:26:05.795779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:26:05.799034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:26:05.799225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:26:05.803857 augenrules[1404]: /sbin/augenrules: No change
Jul 15 23:26:05.806374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:26:05.806431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:26:05.809018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 15 23:26:05.821243 augenrules[1461]: No rules
Jul 15 23:26:05.824406 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:26:05.829298 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:26:05.839272 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 15 23:26:05.896304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:26:05.901336 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:26:05.939686 systemd-resolved[1353]: Positive Trust Anchors:
Jul 15 23:26:05.939705 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:26:05.939739 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:26:05.945426 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 15 23:26:05.949089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:26:05.951129 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:26:05.953365 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Jul 15 23:26:05.954627 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:26:05.956254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:26:05.957499 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:26:05.958849 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:26:05.958993 systemd-networkd[1440]: lo: Link UP
Jul 15 23:26:05.959006 systemd-networkd[1440]: lo: Gained carrier
Jul 15 23:26:05.959814 systemd-networkd[1440]: Enumeration completed
Jul 15 23:26:05.960277 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:26:05.960739 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:26:05.960751 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:26:05.961640 systemd-networkd[1440]: eth0: Link UP
Jul 15 23:26:05.961784 systemd-networkd[1440]: eth0: Gained carrier
Jul 15 23:26:05.961802 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:26:05.961881 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:26:05.963351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:26:05.964781 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:26:05.966509 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:26:05.966542 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:26:05.967444 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:26:05.969607 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:26:05.972029 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:26:05.977098 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:26:05.980349 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:26:05.981100 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 23:26:05.981635 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Jul 15 23:26:05.982451 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:26:05.984183 systemd-timesyncd[1458]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 15 23:26:05.984237 systemd-timesyncd[1458]: Initial clock synchronization to Tue 2025-07-15 23:26:05.876239 UTC.
Jul 15 23:26:05.993404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:26:05.995618 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:26:05.997315 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:26:05.998623 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:26:06.000241 systemd[1]: Reached target network.target - Network.
Jul 15 23:26:06.001160 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:26:06.002876 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:26:06.003912 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:26:06.003939 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:26:06.004882 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:26:06.007287 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:26:06.015262 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:26:06.017292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:26:06.019142 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:26:06.025574 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:26:06.026561 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:26:06.028714 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:26:06.030880 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:26:06.031212 jq[1498]: false
Jul 15 23:26:06.035190 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:26:06.038289 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:26:06.040283 extend-filesystems[1499]: Found /dev/vda6
Jul 15 23:26:06.042523 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:26:06.045255 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:26:06.047761 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 23:26:06.048575 extend-filesystems[1499]: Found /dev/vda9
Jul 15 23:26:06.048595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:26:06.049105 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:26:06.050806 extend-filesystems[1499]: Checking size of /dev/vda9
Jul 15 23:26:06.052298 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:26:06.057025 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:26:06.058501 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:26:06.058671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:26:06.058973 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:26:06.059160 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:26:06.060954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:26:06.061128 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:26:06.071124 jq[1520]: true
Jul 15 23:26:06.084482 extend-filesystems[1499]: Resized partition /dev/vda9
Jul 15 23:26:06.086511 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:26:06.089115 extend-filesystems[1539]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 23:26:06.094208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:26:06.105117 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:26:06.105464 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 15 23:26:06.109060 jq[1531]: true
Jul 15 23:26:06.130435 dbus-daemon[1496]: [system] SELinux support is enabled
Jul 15 23:26:06.131264 tar[1523]: linux-arm64/helm
Jul 15 23:26:06.131492 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:26:06.135737 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 15 23:26:06.138469 systemd-logind[1509]: New seat seat0.
Jul 15 23:26:06.140249 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:26:06.143584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:26:06.143622 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:26:06.145189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:26:06.145213 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:26:06.152494 update_engine[1518]: I20250715 23:26:06.148917 1518 main.cc:92] Flatcar Update Engine starting
Jul 15 23:26:06.151912 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 15 23:26:06.156221 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 23:26:06.161290 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:26:06.161491 update_engine[1518]: I20250715 23:26:06.161450 1518 update_check_scheduler.cc:74] Next update check in 11m55s
Jul 15 23:26:06.163718 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:26:06.165769 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 23:26:06.165769 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 23:26:06.165769 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 23:26:06.171187 extend-filesystems[1499]: Resized filesystem in /dev/vda9
Jul 15 23:26:06.167119 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:26:06.167683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:26:06.193310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:26:06.218389 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:26:06.220790 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:26:06.224292 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 15 23:26:06.241438 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 23:26:06.344034 containerd[1526]: time="2025-07-15T23:26:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:26:06.344565 containerd[1526]: time="2025-07-15T23:26:06.344530796Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 15 23:26:06.353949 containerd[1526]: time="2025-07-15T23:26:06.353909819Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.298µs"
Jul 15 23:26:06.353989 containerd[1526]: time="2025-07-15T23:26:06.353947047Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 23:26:06.354033 containerd[1526]: time="2025-07-15T23:26:06.354014272Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 23:26:06.354270 containerd[1526]: time="2025-07-15T23:26:06.354245068Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 23:26:06.354309 containerd[1526]: time="2025-07-15T23:26:06.354273555Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 23:26:06.354309 containerd[1526]: time="2025-07-15T23:26:06.354299857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354429 containerd[1526]: time="2025-07-15T23:26:06.354407329Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354429 containerd[1526]: time="2025-07-15T23:26:06.354425526Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354814 containerd[1526]: time="2025-07-15T23:26:06.354786798Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354839 containerd[1526]: time="2025-07-15T23:26:06.354812623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354839 containerd[1526]: time="2025-07-15T23:26:06.354827046Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354839 containerd[1526]: time="2025-07-15T23:26:06.354835946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 23:26:06.354996 containerd[1526]: time="2025-07-15T23:26:06.354973653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 23:26:06.355347 containerd[1526]: time="2025-07-15T23:26:06.355321854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:26:06.355435 containerd[1526]: time="2025-07-15T23:26:06.355366233Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:26:06.355461 containerd[1526]: time="2025-07-15T23:26:06.355437550Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 23:26:06.356306 containerd[1526]: time="2025-07-15T23:26:06.356276189Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 23:26:06.356686 containerd[1526]: time="2025-07-15T23:26:06.356661618Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 23:26:06.356768 containerd[1526]: time="2025-07-15T23:26:06.356750178Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 23:26:06.360134 containerd[1526]: time="2025-07-15T23:26:06.360102349Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 23:26:06.360184 containerd[1526]: time="2025-07-15T23:26:06.360159164Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 23:26:06.360202 containerd[1526]: time="2025-07-15T23:26:06.360185029Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 23:26:06.360219 containerd[1526]: time="2025-07-15T23:26:06.360200643Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 23:26:06.360219 containerd[1526]: time="2025-07-15T23:26:06.360212761Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 23:26:06.360266 containerd[1526]: time="2025-07-15T23:26:06.360224283Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 23:26:06.360266 containerd[1526]: time="2025-07-15T23:26:06.360243354Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 23:26:06.360266 containerd[1526]: time="2025-07-15T23:26:06.360256028Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 23:26:06.360266 containerd[1526]: time="2025-07-15T23:26:06.360265961Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 23:26:06.360326 containerd[1526]: time="2025-07-15T23:26:06.360275933Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 23:26:06.360326 containerd[1526]: time="2025-07-15T23:26:06.360284912Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 23:26:06.360326 containerd[1526]: time="2025-07-15T23:26:06.360296792Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 23:26:06.360439 containerd[1526]: time="2025-07-15T23:26:06.360416700Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 23:26:06.360463 containerd[1526]: time="2025-07-15T23:26:06.360444750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 23:26:06.360481 containerd[1526]: time="2025-07-15T23:26:06.360461794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 23:26:06.360481 containerd[1526]: time="2025-07-15T23:26:06.360473038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 23:26:06.360512 containerd[1526]: time="2025-07-15T23:26:06.360483289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 23:26:06.360512 containerd[1526]: time="2025-07-15T23:26:06.360494612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 23:26:06.360548 containerd[1526]: time="2025-07-15T23:26:06.360514596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 23:26:06.360548 containerd[1526]: time="2025-07-15T23:26:06.360524768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 23:26:06.360548 containerd[1526]: time="2025-07-15T23:26:06.360535296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 23:26:06.360548 containerd[1526]: time="2025-07-15T23:26:06.360545467Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 23:26:06.360620 containerd[1526]: time="2025-07-15T23:26:06.360555559Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 23:26:06.360764 containerd[1526]: time="2025-07-15T23:26:06.360746704Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 23:26:06.360789 containerd[1526]: time="2025-07-15T23:26:06.360771973Z" level=info msg="Start snapshots syncer"
Jul 15 23:26:06.360806 containerd[1526]: time="2025-07-15T23:26:06.360800500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 23:26:06.361215 containerd[1526]: time="2025-07-15T23:26:06.361021562Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 23:26:06.361309 containerd[1526]: time="2025-07-15T23:26:06.361239486Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 23:26:06.361418 containerd[1526]: time="2025-07-15T23:26:06.361391218Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 23:26:06.361580 containerd[1526]: time="2025-07-15T23:26:06.361554949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 23:26:06.361607 containerd[1526]: time="2025-07-15T23:26:06.361590349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 23:26:06.361607 containerd[1526]: time="2025-07-15T23:26:06.361601672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 23:26:06.361639 containerd[1526]: time="2025-07-15T23:26:06.361613591Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 23:26:06.361639 containerd[1526]: time="2025-07-15T23:26:06.361626385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 23:26:06.361639 containerd[1526]: time="2025-07-15T23:26:06.361637112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 23:26:06.361693 containerd[1526]: time="2025-07-15T23:26:06.361655070Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 23:26:06.361762 containerd[1526]: time="2025-07-15T23:26:06.361741684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 23:26:06.361785 containerd[1526]: time="2025-07-15T23:26:06.361765165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 23:26:06.361785 containerd[1526]: time="2025-07-15T23:26:06.361778236Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 23:26:06.361840 containerd[1526]: time="2025-07-15T23:26:06.361824721Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:26:06.361912 containerd[1526]: time="2025-07-15T23:26:06.361892542Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:26:06.361939 containerd[1526]: time="2025-07-15T23:26:06.361911493Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:26:06.361939 containerd[1526]: time="2025-07-15T23:26:06.361922578Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:26:06.361939 containerd[1526]: time="2025-07-15T23:26:06.361930842Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 23:26:06.361987 containerd[1526]: time="2025-07-15T23:26:06.361941013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 23:26:06.361987 containerd[1526]: time="2025-07-15T23:26:06.361952098Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 23:26:06.362148 containerd[1526]: time="2025-07-15T23:26:06.362129814Z" level=info msg="runtime interface created"
Jul 15 23:26:06.362148 containerd[1526]: time="2025-07-15T23:26:06.362145508Z" level=info msg="created NRI interface"
Jul 15 23:26:06.362185 containerd[1526]: time="2025-07-15T23:26:06.362156911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 23:26:06.362185 containerd[1526]: time="2025-07-15T23:26:06.362170777Z" level=info msg="Connect containerd service"
Jul 15 23:26:06.362225 containerd[1526]: time="2025-07-15T23:26:06.362208402Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 23:26:06.364054 containerd[1526]: time="2025-07-15T23:26:06.363258369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:26:06.483496 containerd[1526]: time="2025-07-15T23:26:06.483452176Z" level=info msg="Start subscribing containerd event"
Jul 15 23:26:06.483584 containerd[1526]: time="2025-07-15T23:26:06.483510143Z" level=info msg="Start recovering state"
Jul 15 23:26:06.483604 containerd[1526]: time="2025-07-15T23:26:06.483591870Z" level=info msg="Start event monitor"
Jul 15 23:26:06.483621 containerd[1526]: time="2025-07-15T23:26:06.483611576Z" level=info msg="Start cni network conf syncer for default"
Jul 15 23:26:06.483639 containerd[1526]: time="2025-07-15T23:26:06.483621072Z" level=info msg="Start streaming server"
Jul 15 23:26:06.483639 containerd[1526]: time="2025-07-15T23:26:06.483629336Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 23:26:06.483639 containerd[1526]: time="2025-07-15T23:26:06.483636567Z" level=info msg="runtime interface starting up..."
Jul 15 23:26:06.483705 containerd[1526]: time="2025-07-15T23:26:06.483642367Z" level=info msg="starting plugins..."
Jul 15 23:26:06.483705 containerd[1526]: time="2025-07-15T23:26:06.483656949Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 23:26:06.484124 containerd[1526]: time="2025-07-15T23:26:06.484100464Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 23:26:06.484203 containerd[1526]: time="2025-07-15T23:26:06.484150724Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 23:26:06.484203 containerd[1526]: time="2025-07-15T23:26:06.484197964Z" level=info msg="containerd successfully booted in 0.140552s"
Jul 15 23:26:06.484369 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 23:26:06.522408 tar[1523]: linux-arm64/LICENSE
Jul 15 23:26:06.522491 tar[1523]: linux-arm64/README.md
Jul 15 23:26:06.545094 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 23:26:06.709866 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 23:26:06.728693 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 23:26:06.732039 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 23:26:06.754497 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 23:26:06.755112 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 23:26:06.757695 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 23:26:06.777533 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 23:26:06.780401 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 23:26:06.782455 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 15 23:26:06.783714 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 23:26:07.438173 systemd-networkd[1440]: eth0: Gained IPv6LL
Jul 15 23:26:07.440805 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:26:07.443489 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:26:07.446301 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 15 23:26:07.448843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:26:07.460236 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:26:07.473308 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 15 23:26:07.473526 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 15 23:26:07.475211 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:26:07.479527 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:26:08.009669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:26:08.012154 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 23:26:08.013779 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:26:08.017525 systemd[1]: Startup finished in 2.127s (kernel) + 5.690s (initrd) + 3.736s (userspace) = 11.554s.
Jul 15 23:26:08.433087 kubelet[1638]: E0715 23:26:08.431944 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:26:08.434388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:26:08.434540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:26:08.435075 systemd[1]: kubelet.service: Consumed 817ms CPU time, 256.5M memory peak.
Jul 15 23:26:11.881463 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 23:26:11.882731 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:48670.service - OpenSSH per-connection server daemon (10.0.0.1:48670).
Jul 15 23:26:11.977377 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 48670 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:26:11.979161 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:26:11.985301 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 23:26:11.986227 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 23:26:11.992501 systemd-logind[1509]: New session 1 of user core.
Jul 15 23:26:12.009225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 23:26:12.011991 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 23:26:12.034769 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 23:26:12.036931 systemd-logind[1509]: New session c1 of user core.
Jul 15 23:26:12.145190 systemd[1655]: Queued start job for default target default.target.
Jul 15 23:26:12.152956 systemd[1655]: Created slice app.slice - User Application Slice.
Jul 15 23:26:12.152990 systemd[1655]: Reached target paths.target - Paths.
Jul 15 23:26:12.153029 systemd[1655]: Reached target timers.target - Timers.
Jul 15 23:26:12.154321 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 23:26:12.163320 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 23:26:12.163385 systemd[1655]: Reached target sockets.target - Sockets.
Jul 15 23:26:12.163426 systemd[1655]: Reached target basic.target - Basic System.
Jul 15 23:26:12.163454 systemd[1655]: Reached target default.target - Main User Target.
Jul 15 23:26:12.163482 systemd[1655]: Startup finished in 121ms.
Jul 15 23:26:12.163649 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 23:26:12.165030 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 23:26:12.228119 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:48686.service - OpenSSH per-connection server daemon (10.0.0.1:48686). Jul 15 23:26:12.273354 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 48686 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:26:12.274627 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:26:12.278697 systemd-logind[1509]: New session 2 of user core. Jul 15 23:26:12.289215 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:26:12.343200 sshd[1668]: Connection closed by 10.0.0.1 port 48686 Jul 15 23:26:12.343612 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Jul 15 23:26:12.353176 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:48686.service: Deactivated successfully. Jul 15 23:26:12.355776 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:26:12.356534 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:26:12.358925 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:48698.service - OpenSSH per-connection server daemon (10.0.0.1:48698). Jul 15 23:26:12.359568 systemd-logind[1509]: Removed session 2. Jul 15 23:26:12.409651 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 48698 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:26:12.411077 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:26:12.418643 systemd-logind[1509]: New session 3 of user core. Jul 15 23:26:12.426211 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:26:12.474735 sshd[1676]: Connection closed by 10.0.0.1 port 48698 Jul 15 23:26:12.475039 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Jul 15 23:26:12.486933 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:48698.service: Deactivated successfully. 
Jul 15 23:26:12.488631 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:26:12.490582 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:26:12.492884 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:48708.service - OpenSSH per-connection server daemon (10.0.0.1:48708). Jul 15 23:26:12.493484 systemd-logind[1509]: Removed session 3. Jul 15 23:26:12.541808 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 48708 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:26:12.542297 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:26:12.547113 systemd-logind[1509]: New session 4 of user core. Jul 15 23:26:12.557204 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:26:12.607562 sshd[1684]: Connection closed by 10.0.0.1 port 48708 Jul 15 23:26:12.607982 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jul 15 23:26:12.618759 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:48708.service: Deactivated successfully. Jul 15 23:26:12.621389 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:26:12.622029 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:26:12.624423 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:58880.service - OpenSSH per-connection server daemon (10.0.0.1:58880). Jul 15 23:26:12.624917 systemd-logind[1509]: Removed session 4. Jul 15 23:26:12.666965 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 58880 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:26:12.668655 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:26:12.673632 systemd-logind[1509]: New session 5 of user core. Jul 15 23:26:12.683307 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 15 23:26:12.741778 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:26:12.742067 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:26:12.755675 sudo[1694]: pam_unix(sudo:session): session closed for user root Jul 15 23:26:12.757141 sshd[1693]: Connection closed by 10.0.0.1 port 58880 Jul 15 23:26:12.757637 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jul 15 23:26:12.772288 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:58880.service: Deactivated successfully. Jul 15 23:26:12.774434 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:26:12.775134 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:26:12.777422 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:58892.service - OpenSSH per-connection server daemon (10.0.0.1:58892). Jul 15 23:26:12.778043 systemd-logind[1509]: Removed session 5. Jul 15 23:26:12.834004 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 58892 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:26:12.835313 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:26:12.843264 systemd-logind[1509]: New session 6 of user core. Jul 15 23:26:12.867296 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 15 23:26:12.917677 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:26:12.918356 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:26:12.986830 sudo[1704]: pam_unix(sudo:session): session closed for user root Jul 15 23:26:12.991618 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:26:12.991884 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:26:12.999929 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:26:13.048645 augenrules[1726]: No rules Jul 15 23:26:13.049813 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:26:13.050023 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:26:13.052114 sudo[1703]: pam_unix(sudo:session): session closed for user root Jul 15 23:26:13.053199 sshd[1702]: Connection closed by 10.0.0.1 port 58892 Jul 15 23:26:13.053506 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jul 15 23:26:13.064080 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:58892.service: Deactivated successfully. Jul 15 23:26:13.066236 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:26:13.068210 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:26:13.070609 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:58902.service - OpenSSH per-connection server daemon (10.0.0.1:58902). Jul 15 23:26:13.071124 systemd-logind[1509]: Removed session 6. Jul 15 23:26:13.121387 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 58902 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:26:13.122468 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:26:13.126852 systemd-logind[1509]: New session 7 of user core. 
Jul 15 23:26:13.133258 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:26:13.185948 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:26:13.186631 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:26:13.572285 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:26:13.582423 (dockerd)[1759]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:26:13.851430 dockerd[1759]: time="2025-07-15T23:26:13.851294638Z" level=info msg="Starting up" Jul 15 23:26:13.853473 dockerd[1759]: time="2025-07-15T23:26:13.853277111Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:26:13.894348 dockerd[1759]: time="2025-07-15T23:26:13.894297192Z" level=info msg="Loading containers: start." Jul 15 23:26:13.902082 kernel: Initializing XFRM netlink socket Jul 15 23:26:14.101191 systemd-networkd[1440]: docker0: Link UP Jul 15 23:26:14.107767 dockerd[1759]: time="2025-07-15T23:26:14.107673697Z" level=info msg="Loading containers: done." 
Jul 15 23:26:14.120333 dockerd[1759]: time="2025-07-15T23:26:14.120273941Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:26:14.120501 dockerd[1759]: time="2025-07-15T23:26:14.120370034Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:26:14.120501 dockerd[1759]: time="2025-07-15T23:26:14.120467322Z" level=info msg="Initializing buildkit" Jul 15 23:26:14.147663 dockerd[1759]: time="2025-07-15T23:26:14.147614880Z" level=info msg="Completed buildkit initialization" Jul 15 23:26:14.152554 dockerd[1759]: time="2025-07-15T23:26:14.152510345Z" level=info msg="Daemon has completed initialization" Jul 15 23:26:14.152617 dockerd[1759]: time="2025-07-15T23:26:14.152559467Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:26:14.152771 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:26:14.768169 containerd[1526]: time="2025-07-15T23:26:14.768082548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 15 23:26:15.483663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658360904.mount: Deactivated successfully. 
Jul 15 23:26:16.634677 containerd[1526]: time="2025-07-15T23:26:16.634607375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:16.635219 containerd[1526]: time="2025-07-15T23:26:16.635179668Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815" Jul 15 23:26:16.635981 containerd[1526]: time="2025-07-15T23:26:16.635950220Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:16.638598 containerd[1526]: time="2025-07-15T23:26:16.638573689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:16.639671 containerd[1526]: time="2025-07-15T23:26:16.639549154Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.871422858s" Jul 15 23:26:16.639671 containerd[1526]: time="2025-07-15T23:26:16.639590448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Jul 15 23:26:16.640752 containerd[1526]: time="2025-07-15T23:26:16.640728418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 15 23:26:17.827988 containerd[1526]: time="2025-07-15T23:26:17.827928559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:17.828726 containerd[1526]: time="2025-07-15T23:26:17.828655067Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285" Jul 15 23:26:17.829200 containerd[1526]: time="2025-07-15T23:26:17.829167048Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:17.832158 containerd[1526]: time="2025-07-15T23:26:17.832108046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:17.833080 containerd[1526]: time="2025-07-15T23:26:17.833034291Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.192276774s" Jul 15 23:26:17.833150 containerd[1526]: time="2025-07-15T23:26:17.833083567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Jul 15 23:26:17.833510 containerd[1526]: time="2025-07-15T23:26:17.833486948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Jul 15 23:26:18.684924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:26:18.688266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:26:18.816507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 23:26:18.820301 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:26:18.938689 containerd[1526]: time="2025-07-15T23:26:18.938563116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:18.940238 containerd[1526]: time="2025-07-15T23:26:18.940207400Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091" Jul 15 23:26:18.941296 containerd[1526]: time="2025-07-15T23:26:18.941268379Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:18.944164 containerd[1526]: time="2025-07-15T23:26:18.944094945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:18.945036 containerd[1526]: time="2025-07-15T23:26:18.945002242Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.111483517s" Jul 15 23:26:18.945036 containerd[1526]: time="2025-07-15T23:26:18.945033385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Jul 15 23:26:18.945615 containerd[1526]: time="2025-07-15T23:26:18.945586305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Jul 15 23:26:18.950456 
kubelet[2039]: E0715 23:26:18.950408 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:26:18.953543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:26:18.953673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:26:18.955124 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.6M memory peak. Jul 15 23:26:19.975487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1727880491.mount: Deactivated successfully. Jul 15 23:26:20.310542 containerd[1526]: time="2025-07-15T23:26:20.310414988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:20.311209 containerd[1526]: time="2025-07-15T23:26:20.311174751Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995" Jul 15 23:26:20.312025 containerd[1526]: time="2025-07-15T23:26:20.311998978Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:20.316529 containerd[1526]: time="2025-07-15T23:26:20.316489862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:20.317191 containerd[1526]: time="2025-07-15T23:26:20.316994004Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.371380501s" Jul 15 23:26:20.317191 containerd[1526]: time="2025-07-15T23:26:20.317038762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Jul 15 23:26:20.317802 containerd[1526]: time="2025-07-15T23:26:20.317712600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 23:26:20.987553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240923214.mount: Deactivated successfully. Jul 15 23:26:21.586367 containerd[1526]: time="2025-07-15T23:26:21.586318959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:21.587321 containerd[1526]: time="2025-07-15T23:26:21.587282729Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 15 23:26:21.588118 containerd[1526]: time="2025-07-15T23:26:21.588065085Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:21.591552 containerd[1526]: time="2025-07-15T23:26:21.591484563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:21.594074 containerd[1526]: time="2025-07-15T23:26:21.593086698Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.274674984s" Jul 15 23:26:21.594074 containerd[1526]: time="2025-07-15T23:26:21.593127593Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 23:26:21.595492 containerd[1526]: time="2025-07-15T23:26:21.595463249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:26:22.043348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952233301.mount: Deactivated successfully. Jul 15 23:26:22.047449 containerd[1526]: time="2025-07-15T23:26:22.047400791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:26:22.048573 containerd[1526]: time="2025-07-15T23:26:22.048540135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 15 23:26:22.049495 containerd[1526]: time="2025-07-15T23:26:22.049459886Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:26:22.051978 containerd[1526]: time="2025-07-15T23:26:22.051942723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:26:22.052517 containerd[1526]: time="2025-07-15T23:26:22.052478596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 456.98267ms" Jul 15 23:26:22.052517 containerd[1526]: time="2025-07-15T23:26:22.052512595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 23:26:22.052982 containerd[1526]: time="2025-07-15T23:26:22.052940567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 23:26:22.556394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740324984.mount: Deactivated successfully. Jul 15 23:26:24.050825 containerd[1526]: time="2025-07-15T23:26:24.050778073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:24.051417 containerd[1526]: time="2025-07-15T23:26:24.051386669Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 15 23:26:24.052840 containerd[1526]: time="2025-07-15T23:26:24.052811223Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:24.171657 containerd[1526]: time="2025-07-15T23:26:24.171608292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:26:24.172825 containerd[1526]: time="2025-07-15T23:26:24.172735074Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"66535646\" in 2.119759988s" Jul 15 23:26:24.172825 containerd[1526]: time="2025-07-15T23:26:24.172775030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 15 23:26:28.698235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:26:28.698382 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.6M memory peak. Jul 15 23:26:28.702334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:26:28.729274 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-7.scope)... Jul 15 23:26:28.729289 systemd[1]: Reloading... Jul 15 23:26:28.817091 zram_generator::config[2239]: No configuration found. Jul 15 23:26:28.925787 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:26:29.018998 systemd[1]: Reloading finished in 289 ms. Jul 15 23:26:29.055402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:26:29.057889 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:26:29.058131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:26:29.058184 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.1M memory peak. Jul 15 23:26:29.059635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:26:29.183686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:26:29.187094 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:26:29.224677 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:26:29.224677 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 23:26:29.224677 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:26:29.224677 kubelet[2287]: I0715 23:26:29.224571 2287 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:26:30.302822 kubelet[2287]: I0715 23:26:30.302787 2287 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 23:26:30.303289 kubelet[2287]: I0715 23:26:30.303273 2287 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:26:30.303644 kubelet[2287]: I0715 23:26:30.303627 2287 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 23:26:30.373770 kubelet[2287]: E0715 23:26:30.373722 2287 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:26:30.374774 kubelet[2287]: I0715 23:26:30.374738 2287 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:26:30.383937 kubelet[2287]: I0715 23:26:30.383920 2287 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:26:30.387268 kubelet[2287]: I0715 
23:26:30.387241 2287 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 23:26:30.388063 kubelet[2287]: I0715 23:26:30.388031 2287 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 23:26:30.388213 kubelet[2287]: I0715 23:26:30.388181 2287 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:26:30.388387 kubelet[2287]: I0715 23:26:30.388213 2287 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"Enforc
eCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:26:30.388476 kubelet[2287]: I0715 23:26:30.388451 2287 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:26:30.388476 kubelet[2287]: I0715 23:26:30.388462 2287 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 23:26:30.388715 kubelet[2287]: I0715 23:26:30.388692 2287 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:26:30.390992 kubelet[2287]: I0715 23:26:30.390966 2287 kubelet.go:408] "Attempting to sync node with API server" Jul 15 23:26:30.391040 kubelet[2287]: I0715 23:26:30.390995 2287 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:26:30.391040 kubelet[2287]: I0715 23:26:30.391020 2287 kubelet.go:314] "Adding apiserver pod source" Jul 15 23:26:30.391262 kubelet[2287]: I0715 23:26:30.391110 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:26:30.393786 kubelet[2287]: W0715 23:26:30.393674 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 15 23:26:30.393849 kubelet[2287]: E0715 23:26:30.393803 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:26:30.394281 kubelet[2287]: W0715 23:26:30.394184 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 15 23:26:30.394281 kubelet[2287]: E0715 23:26:30.394243 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:26:30.395594 kubelet[2287]: I0715 23:26:30.395496 2287 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:26:30.398451 kubelet[2287]: I0715 23:26:30.398433 2287 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:26:30.398557 kubelet[2287]: W0715 23:26:30.398545 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 23:26:30.399636 kubelet[2287]: I0715 23:26:30.399506 2287 server.go:1274] "Started kubelet" Jul 15 23:26:30.400302 kubelet[2287]: I0715 23:26:30.400249 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:26:30.400582 kubelet[2287]: I0715 23:26:30.400561 2287 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:26:30.400640 kubelet[2287]: I0715 23:26:30.400621 2287 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:26:30.400729 kubelet[2287]: I0715 23:26:30.400707 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:26:30.404269 kubelet[2287]: I0715 23:26:30.401648 2287 server.go:449] "Adding debug handlers to kubelet server" Jul 15 23:26:30.404269 kubelet[2287]: I0715 23:26:30.402947 2287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:26:30.404269 kubelet[2287]: I0715 23:26:30.404231 2287 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 23:26:30.404359 kubelet[2287]: I0715 23:26:30.404344 2287 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 23:26:30.404530 kubelet[2287]: I0715 23:26:30.404396 2287 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:26:30.409073 kubelet[2287]: W0715 23:26:30.404902 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 15 23:26:30.409073 kubelet[2287]: E0715 23:26:30.404945 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:26:30.409073 kubelet[2287]: I0715 23:26:30.405338 2287 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:26:30.409073 kubelet[2287]: E0715 23:26:30.407402 2287 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:26:30.409073 kubelet[2287]: I0715 23:26:30.407519 2287 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:26:30.409073 kubelet[2287]: I0715 23:26:30.407528 2287 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:26:30.409225 kubelet[2287]: E0715 23:26:30.409117 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Jul 15 23:26:30.410411 kubelet[2287]: E0715 23:26:30.409463 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852906a1b4c909d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:26:30.399479965 +0000 UTC m=+1.209596819,LastTimestamp:2025-07-15 23:26:30.399479965 +0000 UTC m=+1.209596819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:26:30.412775 kubelet[2287]: E0715 23:26:30.412745 2287 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:26:30.417630 kubelet[2287]: I0715 23:26:30.417581 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:26:30.418543 kubelet[2287]: I0715 23:26:30.418515 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:26:30.418543 kubelet[2287]: I0715 23:26:30.418541 2287 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 23:26:30.418603 kubelet[2287]: I0715 23:26:30.418558 2287 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 23:26:30.418622 kubelet[2287]: E0715 23:26:30.418598 2287 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:26:30.419084 kubelet[2287]: W0715 23:26:30.419028 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 15 23:26:30.419152 kubelet[2287]: E0715 23:26:30.419092 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:26:30.419760 kubelet[2287]: I0715 23:26:30.419739 2287 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 23:26:30.419760 kubelet[2287]: I0715 23:26:30.419753 2287 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Jul 15 23:26:30.419830 kubelet[2287]: I0715 23:26:30.419768 2287 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:26:30.507858 kubelet[2287]: E0715 23:26:30.507812 2287 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:26:30.518981 kubelet[2287]: E0715 23:26:30.518955 2287 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:26:30.538893 kubelet[2287]: I0715 23:26:30.538860 2287 policy_none.go:49] "None policy: Start" Jul 15 23:26:30.539815 kubelet[2287]: I0715 23:26:30.539792 2287 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 23:26:30.539872 kubelet[2287]: I0715 23:26:30.539821 2287 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:26:30.550250 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:26:30.563990 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:26:30.567473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 15 23:26:30.586123 kubelet[2287]: I0715 23:26:30.585899 2287 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:26:30.586123 kubelet[2287]: I0715 23:26:30.586120 2287 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:26:30.586248 kubelet[2287]: I0715 23:26:30.586130 2287 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:26:30.586580 kubelet[2287]: I0715 23:26:30.586551 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:26:30.588359 kubelet[2287]: E0715 23:26:30.588330 2287 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 23:26:30.609748 kubelet[2287]: E0715 23:26:30.609656 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Jul 15 23:26:30.687868 kubelet[2287]: I0715 23:26:30.687826 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:26:30.688398 kubelet[2287]: E0715 23:26:30.688354 2287 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jul 15 23:26:30.749118 systemd[1]: Created slice kubepods-burstable-podaf08624e782a4c221250594eff3aaa06.slice - libcontainer container kubepods-burstable-podaf08624e782a4c221250594eff3aaa06.slice. Jul 15 23:26:30.763088 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. 
Jul 15 23:26:30.787433 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Jul 15 23:26:30.806262 kubelet[2287]: I0715 23:26:30.806230 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:26:30.806473 kubelet[2287]: I0715 23:26:30.806458 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af08624e782a4c221250594eff3aaa06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af08624e782a4c221250594eff3aaa06\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:26:30.806583 kubelet[2287]: I0715 23:26:30.806566 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af08624e782a4c221250594eff3aaa06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af08624e782a4c221250594eff3aaa06\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:26:30.806721 kubelet[2287]: I0715 23:26:30.806685 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:26:30.806866 kubelet[2287]: I0715 23:26:30.806812 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:26:30.806866 kubelet[2287]: I0715 23:26:30.806837 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:26:30.807008 kubelet[2287]: I0715 23:26:30.806960 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af08624e782a4c221250594eff3aaa06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af08624e782a4c221250594eff3aaa06\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:26:30.807008 kubelet[2287]: I0715 23:26:30.806982 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:26:30.807150 kubelet[2287]: I0715 23:26:30.807134 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:26:30.890735 kubelet[2287]: I0715 23:26:30.890640 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:26:30.891298 kubelet[2287]: E0715 
23:26:30.891271 2287 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jul 15 23:26:31.010682 kubelet[2287]: E0715 23:26:31.010641 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Jul 15 23:26:31.061894 kubelet[2287]: E0715 23:26:31.061800 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.062500 containerd[1526]: time="2025-07-15T23:26:31.062456814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af08624e782a4c221250594eff3aaa06,Namespace:kube-system,Attempt:0,}" Jul 15 23:26:31.078726 containerd[1526]: time="2025-07-15T23:26:31.078688988Z" level=info msg="connecting to shim 10d53b1f2ca4ec71e918893f4909bd82a7398b84d6ca0922a8078f37264bf992" address="unix:///run/containerd/s/bdc53c9a1945e4d26b464069f219b0807c3d673eae19b47dbbe1e3463aea7c51" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:26:31.085734 kubelet[2287]: E0715 23:26:31.085709 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.086174 containerd[1526]: time="2025-07-15T23:26:31.086146179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Jul 15 23:26:31.090422 kubelet[2287]: E0715 23:26:31.090385 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.090833 containerd[1526]: time="2025-07-15T23:26:31.090747963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Jul 15 23:26:31.104873 systemd[1]: Started cri-containerd-10d53b1f2ca4ec71e918893f4909bd82a7398b84d6ca0922a8078f37264bf992.scope - libcontainer container 10d53b1f2ca4ec71e918893f4909bd82a7398b84d6ca0922a8078f37264bf992. Jul 15 23:26:31.111467 containerd[1526]: time="2025-07-15T23:26:31.111403920Z" level=info msg="connecting to shim 0e4e4e60d909d62cb2660d2221ea910869df975efa4f68cc4e1bbc0f259d69bf" address="unix:///run/containerd/s/a66bb06046f4042f5a7d85c8e14f4b04d8aff6d0df606b70060b041fdd28af61" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:26:31.115136 containerd[1526]: time="2025-07-15T23:26:31.115091730Z" level=info msg="connecting to shim 2aad933ccfaaa49747fca65ab5811f385e2c66d9be0e25449f0b14460b43bc23" address="unix:///run/containerd/s/08ced0b5b3b1e9fc5e5b7faf8ec217093036238c8c14477953de85424e7179e8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:26:31.139224 systemd[1]: Started cri-containerd-0e4e4e60d909d62cb2660d2221ea910869df975efa4f68cc4e1bbc0f259d69bf.scope - libcontainer container 0e4e4e60d909d62cb2660d2221ea910869df975efa4f68cc4e1bbc0f259d69bf. Jul 15 23:26:31.142640 systemd[1]: Started cri-containerd-2aad933ccfaaa49747fca65ab5811f385e2c66d9be0e25449f0b14460b43bc23.scope - libcontainer container 2aad933ccfaaa49747fca65ab5811f385e2c66d9be0e25449f0b14460b43bc23. 
Jul 15 23:26:31.153555 containerd[1526]: time="2025-07-15T23:26:31.153486279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af08624e782a4c221250594eff3aaa06,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d53b1f2ca4ec71e918893f4909bd82a7398b84d6ca0922a8078f37264bf992\"" Jul 15 23:26:31.156660 kubelet[2287]: E0715 23:26:31.156470 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.159431 containerd[1526]: time="2025-07-15T23:26:31.159378252Z" level=info msg="CreateContainer within sandbox \"10d53b1f2ca4ec71e918893f4909bd82a7398b84d6ca0922a8078f37264bf992\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:26:31.167300 containerd[1526]: time="2025-07-15T23:26:31.167263269Z" level=info msg="Container 57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:26:31.177082 containerd[1526]: time="2025-07-15T23:26:31.176780655Z" level=info msg="CreateContainer within sandbox \"10d53b1f2ca4ec71e918893f4909bd82a7398b84d6ca0922a8078f37264bf992\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91\"" Jul 15 23:26:31.177675 containerd[1526]: time="2025-07-15T23:26:31.177647651Z" level=info msg="StartContainer for \"57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91\"" Jul 15 23:26:31.178763 containerd[1526]: time="2025-07-15T23:26:31.178731157Z" level=info msg="connecting to shim 57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91" address="unix:///run/containerd/s/bdc53c9a1945e4d26b464069f219b0807c3d673eae19b47dbbe1e3463aea7c51" protocol=ttrpc version=3 Jul 15 23:26:31.179865 containerd[1526]: time="2025-07-15T23:26:31.178869332Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e4e4e60d909d62cb2660d2221ea910869df975efa4f68cc4e1bbc0f259d69bf\"" Jul 15 23:26:31.180829 kubelet[2287]: E0715 23:26:31.180805 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.182247 containerd[1526]: time="2025-07-15T23:26:31.182218796Z" level=info msg="CreateContainer within sandbox \"0e4e4e60d909d62cb2660d2221ea910869df975efa4f68cc4e1bbc0f259d69bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:26:31.188018 containerd[1526]: time="2025-07-15T23:26:31.187987654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aad933ccfaaa49747fca65ab5811f385e2c66d9be0e25449f0b14460b43bc23\"" Jul 15 23:26:31.188663 kubelet[2287]: E0715 23:26:31.188641 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.190420 containerd[1526]: time="2025-07-15T23:26:31.190372253Z" level=info msg="CreateContainer within sandbox \"2aad933ccfaaa49747fca65ab5811f385e2c66d9be0e25449f0b14460b43bc23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:26:31.191542 containerd[1526]: time="2025-07-15T23:26:31.191495026Z" level=info msg="Container 0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:26:31.198843 containerd[1526]: time="2025-07-15T23:26:31.198334606Z" level=info msg="Container 036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:26:31.200773 
containerd[1526]: time="2025-07-15T23:26:31.200718407Z" level=info msg="CreateContainer within sandbox \"0e4e4e60d909d62cb2660d2221ea910869df975efa4f68cc4e1bbc0f259d69bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da\"" Jul 15 23:26:31.201333 containerd[1526]: time="2025-07-15T23:26:31.201257763Z" level=info msg="StartContainer for \"0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da\"" Jul 15 23:26:31.202473 containerd[1526]: time="2025-07-15T23:26:31.202449444Z" level=info msg="connecting to shim 0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da" address="unix:///run/containerd/s/a66bb06046f4042f5a7d85c8e14f4b04d8aff6d0df606b70060b041fdd28af61" protocol=ttrpc version=3 Jul 15 23:26:31.203205 systemd[1]: Started cri-containerd-57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91.scope - libcontainer container 57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91. 
Jul 15 23:26:31.204930 containerd[1526]: time="2025-07-15T23:26:31.204894922Z" level=info msg="CreateContainer within sandbox \"2aad933ccfaaa49747fca65ab5811f385e2c66d9be0e25449f0b14460b43bc23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c\"" Jul 15 23:26:31.205717 containerd[1526]: time="2025-07-15T23:26:31.205682664Z" level=info msg="StartContainer for \"036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c\"" Jul 15 23:26:31.208917 containerd[1526]: time="2025-07-15T23:26:31.208876937Z" level=info msg="connecting to shim 036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c" address="unix:///run/containerd/s/08ced0b5b3b1e9fc5e5b7faf8ec217093036238c8c14477953de85424e7179e8" protocol=ttrpc version=3 Jul 15 23:26:31.228223 systemd[1]: Started cri-containerd-0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da.scope - libcontainer container 0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da. Jul 15 23:26:31.231922 systemd[1]: Started cri-containerd-036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c.scope - libcontainer container 036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c. 
Jul 15 23:26:31.258275 containerd[1526]: time="2025-07-15T23:26:31.258196783Z" level=info msg="StartContainer for \"57c94359567101a5efb3ebbfa55f63adc427b85e6aebf26e2065ded8146c2e91\" returns successfully" Jul 15 23:26:31.282852 containerd[1526]: time="2025-07-15T23:26:31.282812026Z" level=info msg="StartContainer for \"036d33dab93b6563c0c2bca1f7a83b7513d2429d3d5a6d4908ca79a23a5b4a0c\" returns successfully" Jul 15 23:26:31.298260 kubelet[2287]: I0715 23:26:31.298212 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:26:31.298676 kubelet[2287]: E0715 23:26:31.298547 2287 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jul 15 23:26:31.308869 containerd[1526]: time="2025-07-15T23:26:31.308802183Z" level=info msg="StartContainer for \"0ef4ca00bdd05f5e43875bcf4730b34c4f658a7ef4bdb7d30d396962f05a65da\" returns successfully" Jul 15 23:26:31.430674 kubelet[2287]: E0715 23:26:31.430125 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.432290 kubelet[2287]: E0715 23:26:31.432268 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:31.436357 kubelet[2287]: E0715 23:26:31.436338 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:32.100800 kubelet[2287]: I0715 23:26:32.100771 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:26:32.439311 kubelet[2287]: E0715 23:26:32.438987 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:32.440222 kubelet[2287]: E0715 23:26:32.440198 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:32.551076 kubelet[2287]: E0715 23:26:32.550140 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 23:26:32.669943 kubelet[2287]: E0715 23:26:32.669848 2287 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1852906a1b4c909d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:26:30.399479965 +0000 UTC m=+1.209596819,LastTimestamp:2025-07-15 23:26:30.399479965 +0000 UTC m=+1.209596819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:26:32.724988 kubelet[2287]: I0715 23:26:32.724658 2287 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 23:26:32.724988 kubelet[2287]: E0715 23:26:32.724695 2287 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 23:26:33.395370 kubelet[2287]: I0715 23:26:33.395325 2287 apiserver.go:52] "Watching apiserver" Jul 15 23:26:33.405198 kubelet[2287]: I0715 23:26:33.405148 2287 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 23:26:34.352293 kubelet[2287]: E0715 23:26:34.352244 2287 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:34.441260 kubelet[2287]: E0715 23:26:34.441231 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:34.463561 kubelet[2287]: E0715 23:26:34.463386 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:26:34.657513 systemd[1]: Reload requested from client PID 2567 ('systemctl') (unit session-7.scope)... Jul 15 23:26:34.657528 systemd[1]: Reloading... Jul 15 23:26:34.719089 zram_generator::config[2613]: No configuration found. Jul 15 23:26:34.791666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:26:34.897416 systemd[1]: Reloading finished in 239 ms. Jul 15 23:26:34.924279 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:26:34.942047 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:26:34.943132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:26:34.943190 systemd[1]: kubelet.service: Consumed 1.732s CPU time, 131.6M memory peak. Jul 15 23:26:34.944847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:26:35.103542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 23:26:35.114327 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:26:35.154064 kubelet[2652]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:26:35.154064 kubelet[2652]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 23:26:35.154064 kubelet[2652]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:26:35.154388 kubelet[2652]: I0715 23:26:35.154114 2652 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:26:35.159096 kubelet[2652]: I0715 23:26:35.159063 2652 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 23:26:35.159096 kubelet[2652]: I0715 23:26:35.159089 2652 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:26:35.159313 kubelet[2652]: I0715 23:26:35.159282 2652 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 23:26:35.160658 kubelet[2652]: I0715 23:26:35.160634 2652 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 15 23:26:35.162564 kubelet[2652]: I0715 23:26:35.162538 2652 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:26:35.165653 kubelet[2652]: I0715 23:26:35.165632 2652 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:26:35.168266 kubelet[2652]: I0715 23:26:35.168235 2652 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:26:35.168366 kubelet[2652]: I0715 23:26:35.168343 2652 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 23:26:35.168594 kubelet[2652]: I0715 23:26:35.168559 2652 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:26:35.169069 kubelet[2652]: I0715 23:26:35.168589 2652 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:26:35.169069 kubelet[2652]: I0715 23:26:35.168917 2652 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:26:35.169069 kubelet[2652]: I0715 23:26:35.168929 2652 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 23:26:35.169069 kubelet[2652]: I0715 23:26:35.168978 2652 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:26:35.169577 kubelet[2652]: I0715 23:26:35.169558 2652 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 23:26:35.169657 kubelet[2652]: I0715 23:26:35.169647 2652 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:26:35.169779 kubelet[2652]: I0715 23:26:35.169769 2652 kubelet.go:314] "Adding apiserver pod source"
Jul 15 23:26:35.172115 kubelet[2652]: I0715 23:26:35.172093 2652 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:26:35.173316 kubelet[2652]: I0715 23:26:35.173292 2652 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:26:35.173772 kubelet[2652]: I0715 23:26:35.173735 2652 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:26:35.174189 kubelet[2652]: I0715 23:26:35.174172 2652 server.go:1274] "Started kubelet"
Jul 15 23:26:35.174312 kubelet[2652]: I0715 23:26:35.174285 2652 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:26:35.174594 kubelet[2652]: I0715 23:26:35.174489 2652 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:26:35.174737 kubelet[2652]: I0715 23:26:35.174714 2652 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:26:35.175370 kubelet[2652]: I0715 23:26:35.175340 2652 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 23:26:35.178135 kubelet[2652]: I0715 23:26:35.178116 2652 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:26:35.180261 kubelet[2652]: I0715 23:26:35.180230 2652 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:26:35.181590 kubelet[2652]: E0715 23:26:35.181558 2652 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:26:35.181590 kubelet[2652]: I0715 23:26:35.181592 2652 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 23:26:35.181751 kubelet[2652]: I0715 23:26:35.181733 2652 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 23:26:35.181894 kubelet[2652]: I0715 23:26:35.181845 2652 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:26:35.187835 kubelet[2652]: I0715 23:26:35.187802 2652 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:26:35.190623 kubelet[2652]: I0715 23:26:35.190303 2652 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:26:35.190623 kubelet[2652]: I0715 23:26:35.190326 2652 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 23:26:35.190623 kubelet[2652]: I0715 23:26:35.190352 2652 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 23:26:35.190623 kubelet[2652]: E0715 23:26:35.190396 2652 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:26:35.191265 kubelet[2652]: I0715 23:26:35.191225 2652 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:26:35.191417 kubelet[2652]: I0715 23:26:35.191387 2652 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:26:35.194278 kubelet[2652]: I0715 23:26:35.194247 2652 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:26:35.194359 kubelet[2652]: E0715 23:26:35.194303 2652 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:26:35.226301 kubelet[2652]: I0715 23:26:35.226268 2652 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 23:26:35.226301 kubelet[2652]: I0715 23:26:35.226288 2652 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 23:26:35.226441 kubelet[2652]: I0715 23:26:35.226315 2652 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:26:35.226462 kubelet[2652]: I0715 23:26:35.226446 2652 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 23:26:35.226483 kubelet[2652]: I0715 23:26:35.226456 2652 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 23:26:35.226483 kubelet[2652]: I0715 23:26:35.226477 2652 policy_none.go:49] "None policy: Start"
Jul 15 23:26:35.227253 kubelet[2652]: I0715 23:26:35.227234 2652 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 23:26:35.227849 kubelet[2652]: I0715 23:26:35.227360 2652 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:26:35.227849 kubelet[2652]: I0715 23:26:35.227500 2652 state_mem.go:75] "Updated machine memory state"
Jul 15 23:26:35.231559 kubelet[2652]: I0715 23:26:35.231540 2652 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:26:35.231999 kubelet[2652]: I0715 23:26:35.231939 2652 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:26:35.231999 kubelet[2652]: I0715 23:26:35.231954 2652 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:26:35.233115 kubelet[2652]: I0715 23:26:35.232825 2652 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:26:35.297118 kubelet[2652]: E0715 23:26:35.297080 2652 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 15 23:26:35.297439 kubelet[2652]: E0715 23:26:35.297419 2652 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 15 23:26:35.339297 kubelet[2652]: I0715 23:26:35.339270 2652 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 23:26:35.346091 kubelet[2652]: I0715 23:26:35.346016 2652 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 15 23:26:35.346284 kubelet[2652]: I0715 23:26:35.346218 2652 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 15 23:26:35.483679 kubelet[2652]: I0715 23:26:35.483533 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:26:35.483679 kubelet[2652]: I0715 23:26:35.483587 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:26:35.483679 kubelet[2652]: I0715 23:26:35.483613 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 23:26:35.483679 kubelet[2652]: I0715 23:26:35.483630 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af08624e782a4c221250594eff3aaa06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af08624e782a4c221250594eff3aaa06\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:26:35.483679 kubelet[2652]: I0715 23:26:35.483646 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af08624e782a4c221250594eff3aaa06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af08624e782a4c221250594eff3aaa06\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:26:35.483887 kubelet[2652]: I0715 23:26:35.483672 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:26:35.483887 kubelet[2652]: I0715 23:26:35.483686 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:26:35.483887 kubelet[2652]: I0715 23:26:35.483703 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af08624e782a4c221250594eff3aaa06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af08624e782a4c221250594eff3aaa06\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:26:35.483887 kubelet[2652]: I0715 23:26:35.483718 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:26:35.597585 kubelet[2652]: E0715 23:26:35.597542 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:35.597708 kubelet[2652]: E0715 23:26:35.597679 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:35.597937 kubelet[2652]: E0715 23:26:35.597892 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:35.661257 sudo[2687]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 15 23:26:35.661852 sudo[2687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 15 23:26:36.083732 sudo[2687]: pam_unix(sudo:session): session closed for user root
Jul 15 23:26:36.172550 kubelet[2652]: I0715 23:26:36.172522 2652 apiserver.go:52] "Watching apiserver"
Jul 15 23:26:36.182061 kubelet[2652]: I0715 23:26:36.182029 2652 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 23:26:36.209988 kubelet[2652]: E0715 23:26:36.209961 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:36.219126 kubelet[2652]: E0715 23:26:36.219087 2652 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:26:36.219254 kubelet[2652]: E0715 23:26:36.219135 2652 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 15 23:26:36.220287 kubelet[2652]: E0715 23:26:36.220258 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:36.220433 kubelet[2652]: E0715 23:26:36.220416 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:36.233130 kubelet[2652]: I0715 23:26:36.233077 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.233032063 podStartE2EDuration="2.233032063s" podCreationTimestamp="2025-07-15 23:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:26:36.232875775 +0000 UTC m=+1.113599394" watchObservedRunningTime="2025-07-15 23:26:36.233032063 +0000 UTC m=+1.113755682"
Jul 15 23:26:36.250747 kubelet[2652]: I0715 23:26:36.250609 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.250592721 podStartE2EDuration="1.250592721s" podCreationTimestamp="2025-07-15 23:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:26:36.243181162 +0000 UTC m=+1.123904781" watchObservedRunningTime="2025-07-15 23:26:36.250592721 +0000 UTC m=+1.131316300"
Jul 15 23:26:36.250901 kubelet[2652]: I0715 23:26:36.250780 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.250774345 podStartE2EDuration="2.250774345s" podCreationTimestamp="2025-07-15 23:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:26:36.25056271 +0000 UTC m=+1.131286369" watchObservedRunningTime="2025-07-15 23:26:36.250774345 +0000 UTC m=+1.131497964"
Jul 15 23:26:37.211063 kubelet[2652]: E0715 23:26:37.210934 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:37.211063 kubelet[2652]: E0715 23:26:37.211039 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:37.213062 kubelet[2652]: E0715 23:26:37.212174 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:38.085650 sudo[1738]: pam_unix(sudo:session): session closed for user root
Jul 15 23:26:38.086802 sshd[1737]: Connection closed by 10.0.0.1 port 58902
Jul 15 23:26:38.087200 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Jul 15 23:26:38.091123 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:58902.service: Deactivated successfully.
Jul 15 23:26:38.093000 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 23:26:38.093288 systemd[1]: session-7.scope: Consumed 7.200s CPU time, 263.6M memory peak.
Jul 15 23:26:38.094231 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit.
Jul 15 23:26:38.095300 systemd-logind[1509]: Removed session 7.
Jul 15 23:26:41.489452 kubelet[2652]: I0715 23:26:41.489383 2652 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 23:26:41.490142 kubelet[2652]: I0715 23:26:41.489912 2652 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 23:26:41.490175 containerd[1526]: time="2025-07-15T23:26:41.489718561Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 23:26:42.344930 systemd[1]: Created slice kubepods-besteffort-pod47c501de_6c79_4bde_ad15_a8a2fec50987.slice - libcontainer container kubepods-besteffort-pod47c501de_6c79_4bde_ad15_a8a2fec50987.slice.
Jul 15 23:26:42.364911 systemd[1]: Created slice kubepods-burstable-pod7fc73677_9c82_4a92_bbd8_2900ae94b719.slice - libcontainer container kubepods-burstable-pod7fc73677_9c82_4a92_bbd8_2900ae94b719.slice.
Jul 15 23:26:42.430739 kubelet[2652]: I0715 23:26:42.430688 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-net\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.430739 kubelet[2652]: I0715 23:26:42.430746 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47c501de-6c79-4bde-ad15-a8a2fec50987-kube-proxy\") pod \"kube-proxy-bggvd\" (UID: \"47c501de-6c79-4bde-ad15-a8a2fec50987\") " pod="kube-system/kube-proxy-bggvd"
Jul 15 23:26:42.430880 kubelet[2652]: I0715 23:26:42.430764 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-bpf-maps\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.430880 kubelet[2652]: I0715 23:26:42.430782 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-xtables-lock\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.430880 kubelet[2652]: I0715 23:26:42.430798 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-config-path\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.430880 kubelet[2652]: I0715 23:26:42.430813 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-kernel\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.430880 kubelet[2652]: I0715 23:26:42.430828 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47c501de-6c79-4bde-ad15-a8a2fec50987-xtables-lock\") pod \"kube-proxy-bggvd\" (UID: \"47c501de-6c79-4bde-ad15-a8a2fec50987\") " pod="kube-system/kube-proxy-bggvd"
Jul 15 23:26:42.431003 kubelet[2652]: I0715 23:26:42.430865 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87nt\" (UniqueName: \"kubernetes.io/projected/47c501de-6c79-4bde-ad15-a8a2fec50987-kube-api-access-c87nt\") pod \"kube-proxy-bggvd\" (UID: \"47c501de-6c79-4bde-ad15-a8a2fec50987\") " pod="kube-system/kube-proxy-bggvd"
Jul 15 23:26:42.431003 kubelet[2652]: I0715 23:26:42.430916 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cni-path\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431003 kubelet[2652]: I0715 23:26:42.430948 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fc73677-9c82-4a92-bbd8-2900ae94b719-clustermesh-secrets\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431003 kubelet[2652]: I0715 23:26:42.430967 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47c501de-6c79-4bde-ad15-a8a2fec50987-lib-modules\") pod \"kube-proxy-bggvd\" (UID: \"47c501de-6c79-4bde-ad15-a8a2fec50987\") " pod="kube-system/kube-proxy-bggvd"
Jul 15 23:26:42.431003 kubelet[2652]: I0715 23:26:42.430981 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-run\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431003 kubelet[2652]: I0715 23:26:42.430995 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-etc-cni-netd\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431133 kubelet[2652]: I0715 23:26:42.431019 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-lib-modules\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431133 kubelet[2652]: I0715 23:26:42.431092 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-hostproc\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431133 kubelet[2652]: I0715 23:26:42.431116 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-cgroup\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431204 kubelet[2652]: I0715 23:26:42.431143 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-hubble-tls\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.431204 kubelet[2652]: I0715 23:26:42.431175 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzdm\" (UniqueName: \"kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-kube-api-access-lmzdm\") pod \"cilium-w82sq\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " pod="kube-system/cilium-w82sq"
Jul 15 23:26:42.586344 systemd[1]: Created slice kubepods-besteffort-pod50b2b2aa_8964_419a_b17c_2250a437abab.slice - libcontainer container kubepods-besteffort-pod50b2b2aa_8964_419a_b17c_2250a437abab.slice.
Jul 15 23:26:42.633212 kubelet[2652]: I0715 23:26:42.633096 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50b2b2aa-8964-419a-b17c-2250a437abab-cilium-config-path\") pod \"cilium-operator-5d85765b45-ngh4c\" (UID: \"50b2b2aa-8964-419a-b17c-2250a437abab\") " pod="kube-system/cilium-operator-5d85765b45-ngh4c"
Jul 15 23:26:42.633212 kubelet[2652]: I0715 23:26:42.633138 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m44f\" (UniqueName: \"kubernetes.io/projected/50b2b2aa-8964-419a-b17c-2250a437abab-kube-api-access-2m44f\") pod \"cilium-operator-5d85765b45-ngh4c\" (UID: \"50b2b2aa-8964-419a-b17c-2250a437abab\") " pod="kube-system/cilium-operator-5d85765b45-ngh4c"
Jul 15 23:26:42.663471 kubelet[2652]: E0715 23:26:42.663429 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:42.664496 containerd[1526]: time="2025-07-15T23:26:42.664446135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bggvd,Uid:47c501de-6c79-4bde-ad15-a8a2fec50987,Namespace:kube-system,Attempt:0,}"
Jul 15 23:26:42.669780 kubelet[2652]: E0715 23:26:42.669748 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:42.670422 containerd[1526]: time="2025-07-15T23:26:42.670388017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w82sq,Uid:7fc73677-9c82-4a92-bbd8-2900ae94b719,Namespace:kube-system,Attempt:0,}"
Jul 15 23:26:42.681254 containerd[1526]: time="2025-07-15T23:26:42.681216675Z" level=info msg="connecting to shim 4bd76c77424d6e91b142199d0723de8e872c0bf7d919645db791d6a1ffba8906" address="unix:///run/containerd/s/784dedb64e2aeeb9418749545f31b772ff0cc3babc874f71d646a48f7c6c646e" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:26:42.694379 containerd[1526]: time="2025-07-15T23:26:42.694279781Z" level=info msg="connecting to shim 08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166" address="unix:///run/containerd/s/a9830b9cafe9a6be1ac4e9abba12afe9528cb22e554edf20f4d6273f59d09ea3" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:26:42.706250 systemd[1]: Started cri-containerd-4bd76c77424d6e91b142199d0723de8e872c0bf7d919645db791d6a1ffba8906.scope - libcontainer container 4bd76c77424d6e91b142199d0723de8e872c0bf7d919645db791d6a1ffba8906.
Jul 15 23:26:42.710438 systemd[1]: Started cri-containerd-08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166.scope - libcontainer container 08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166.
Jul 15 23:26:42.737145 containerd[1526]: time="2025-07-15T23:26:42.736854585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bggvd,Uid:47c501de-6c79-4bde-ad15-a8a2fec50987,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bd76c77424d6e91b142199d0723de8e872c0bf7d919645db791d6a1ffba8906\""
Jul 15 23:26:42.740070 containerd[1526]: time="2025-07-15T23:26:42.738723232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w82sq,Uid:7fc73677-9c82-4a92-bbd8-2900ae94b719,Namespace:kube-system,Attempt:0,} returns sandbox id \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\""
Jul 15 23:26:42.740164 kubelet[2652]: E0715 23:26:42.738875 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:42.740809 kubelet[2652]: E0715 23:26:42.740782 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:42.743084 containerd[1526]: time="2025-07-15T23:26:42.741935634Z" level=info msg="CreateContainer within sandbox \"4bd76c77424d6e91b142199d0723de8e872c0bf7d919645db791d6a1ffba8906\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 23:26:42.743296 containerd[1526]: time="2025-07-15T23:26:42.743270314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 15 23:26:42.753707 containerd[1526]: time="2025-07-15T23:26:42.753675252Z" level=info msg="Container 245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:42.760010 containerd[1526]: time="2025-07-15T23:26:42.759962066Z" level=info msg="CreateContainer within sandbox \"4bd76c77424d6e91b142199d0723de8e872c0bf7d919645db791d6a1ffba8906\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7\""
Jul 15 23:26:42.760799 containerd[1526]: time="2025-07-15T23:26:42.760765257Z" level=info msg="StartContainer for \"245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7\""
Jul 15 23:26:42.762371 containerd[1526]: time="2025-07-15T23:26:42.762344295Z" level=info msg="connecting to shim 245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7" address="unix:///run/containerd/s/784dedb64e2aeeb9418749545f31b772ff0cc3babc874f71d646a48f7c6c646e" protocol=ttrpc version=3
Jul 15 23:26:42.782226 systemd[1]: Started cri-containerd-245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7.scope - libcontainer container 245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7.
Jul 15 23:26:42.821590 containerd[1526]: time="2025-07-15T23:26:42.821529385Z" level=info msg="StartContainer for \"245555b51c552020f6d607b5a3ebac9f5dbb62c8e5772aad5b6cb55bcd3285e7\" returns successfully"
Jul 15 23:26:42.890067 kubelet[2652]: E0715 23:26:42.889933 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:42.891310 containerd[1526]: time="2025-07-15T23:26:42.891269155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ngh4c,Uid:50b2b2aa-8964-419a-b17c-2250a437abab,Namespace:kube-system,Attempt:0,}"
Jul 15 23:26:42.914401 containerd[1526]: time="2025-07-15T23:26:42.914356610Z" level=info msg="connecting to shim b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8" address="unix:///run/containerd/s/53e728f4532e5044b04483697cee05302d8748692e7c2dd8582cfa2ae081aed4" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:26:42.939254 systemd[1]: Started cri-containerd-b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8.scope - libcontainer container b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8.
Jul 15 23:26:42.972225 containerd[1526]: time="2025-07-15T23:26:42.972156293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ngh4c,Uid:50b2b2aa-8964-419a-b17c-2250a437abab,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\""
Jul 15 23:26:42.972866 kubelet[2652]: E0715 23:26:42.972847 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:43.098865 kubelet[2652]: E0715 23:26:43.098827 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:43.223860 kubelet[2652]: E0715 23:26:43.223729 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:43.225635 kubelet[2652]: E0715 23:26:43.225589 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:44.907981 kubelet[2652]: E0715 23:26:44.907950 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:44.925263 kubelet[2652]: I0715 23:26:44.925185 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bggvd" podStartSLOduration=2.925165734 podStartE2EDuration="2.925165734s" podCreationTimestamp="2025-07-15 23:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:26:43.244022589 +0000 UTC m=+8.124746168" watchObservedRunningTime="2025-07-15 23:26:44.925165734 +0000 UTC m=+9.805889313"
Jul 15 23:26:45.226461 kubelet[2652]: E0715 23:26:45.226335 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:46.245019 kubelet[2652]: E0715 23:26:46.243105 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:51.467540 update_engine[1518]: I20250715 23:26:51.467480 1518 update_attempter.cc:509] Updating boot flags...
Jul 15 23:26:52.509854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759776037.mount: Deactivated successfully.
Jul 15 23:26:53.771286 containerd[1526]: time="2025-07-15T23:26:53.771226288Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:26:53.771872 containerd[1526]: time="2025-07-15T23:26:53.771822615Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 15 23:26:53.772639 containerd[1526]: time="2025-07-15T23:26:53.772612799Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:26:53.774249 containerd[1526]: time="2025-07-15T23:26:53.774156819Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.030780452s"
Jul 15 23:26:53.774249 containerd[1526]: time="2025-07-15T23:26:53.774196446Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 15 23:26:53.782505 containerd[1526]: time="2025-07-15T23:26:53.782411543Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 15 23:26:53.787485 containerd[1526]: time="2025-07-15T23:26:53.787444592Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 23:26:53.800210 containerd[1526]: time="2025-07-15T23:26:53.800156711Z" level=info msg="Container af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:53.806205 containerd[1526]: time="2025-07-15T23:26:53.806159805Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\""
Jul 15 23:26:53.806710 containerd[1526]: time="2025-07-15T23:26:53.806682196Z" level=info msg="StartContainer for \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\""
Jul 15 23:26:53.808605 containerd[1526]: time="2025-07-15T23:26:53.808546152Z" level=info msg="connecting to shim af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d" address="unix:///run/containerd/s/a9830b9cafe9a6be1ac4e9abba12afe9528cb22e554edf20f4d6273f59d09ea3" protocol=ttrpc version=3
Jul 15 23:26:53.851477 systemd[1]: Started cri-containerd-af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d.scope - libcontainer container af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d.
Jul 15 23:26:53.901323 containerd[1526]: time="2025-07-15T23:26:53.901276974Z" level=info msg="StartContainer for \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" returns successfully"
Jul 15 23:26:54.004874 systemd[1]: cri-containerd-af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d.scope: Deactivated successfully.
Jul 15 23:26:54.041436 containerd[1526]: time="2025-07-15T23:26:54.040828947Z" level=info msg="received exit event container_id:\"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" id:\"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" pid:3091 exited_at:{seconds:1752622014 nanos:26668010}"
Jul 15 23:26:54.053448 containerd[1526]: time="2025-07-15T23:26:54.053404765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" id:\"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" pid:3091 exited_at:{seconds:1752622014 nanos:26668010}"
Jul 15 23:26:54.249811 kubelet[2652]: E0715 23:26:54.249770 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:54.254510 containerd[1526]: time="2025-07-15T23:26:54.254453993Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 23:26:54.261155 containerd[1526]: time="2025-07-15T23:26:54.261109210Z" level=info msg="Container 7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:54.266948 containerd[1526]: time="2025-07-15T23:26:54.266891733Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\""
Jul 15 23:26:54.276864 containerd[1526]: time="2025-07-15T23:26:54.276798003Z" level=info msg="StartContainer for \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\""
Jul 15 23:26:54.277816 containerd[1526]: time="2025-07-15T23:26:54.277782824Z" level=info msg="connecting to shim 7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa" address="unix:///run/containerd/s/a9830b9cafe9a6be1ac4e9abba12afe9528cb22e554edf20f4d6273f59d09ea3" protocol=ttrpc version=3
Jul 15 23:26:54.298248 systemd[1]: Started cri-containerd-7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa.scope - libcontainer container 7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa.
Jul 15 23:26:54.341997 containerd[1526]: time="2025-07-15T23:26:54.341847156Z" level=info msg="StartContainer for \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" returns successfully"
Jul 15 23:26:54.353974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:26:54.354249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:26:54.354625 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:26:54.356128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:26:54.357696 systemd[1]: cri-containerd-7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa.scope: Deactivated successfully.
Jul 15 23:26:54.357978 containerd[1526]: time="2025-07-15T23:26:54.357825181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" id:\"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" pid:3136 exited_at:{seconds:1752622014 nanos:357436939}"
Jul 15 23:26:54.359063 containerd[1526]: time="2025-07-15T23:26:54.359010141Z" level=info msg="received exit event container_id:\"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" id:\"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" pid:3136 exited_at:{seconds:1752622014 nanos:357436939}"
Jul 15 23:26:54.384245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:26:54.798369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d-rootfs.mount: Deactivated successfully.
Jul 15 23:26:55.098443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877482593.mount: Deactivated successfully.
Jul 15 23:26:55.253407 kubelet[2652]: E0715 23:26:55.253261 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:55.263170 containerd[1526]: time="2025-07-15T23:26:55.261623186Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 23:26:55.316896 containerd[1526]: time="2025-07-15T23:26:55.316842215Z" level=info msg="Container eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:55.319844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453828507.mount: Deactivated successfully.
Jul 15 23:26:55.324629 containerd[1526]: time="2025-07-15T23:26:55.324576372Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\""
Jul 15 23:26:55.325765 containerd[1526]: time="2025-07-15T23:26:55.325429569Z" level=info msg="StartContainer for \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\""
Jul 15 23:26:55.327480 containerd[1526]: time="2025-07-15T23:26:55.327438437Z" level=info msg="connecting to shim eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0" address="unix:///run/containerd/s/a9830b9cafe9a6be1ac4e9abba12afe9528cb22e554edf20f4d6273f59d09ea3" protocol=ttrpc version=3
Jul 15 23:26:55.351315 systemd[1]: Started cri-containerd-eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0.scope - libcontainer container eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0.
Jul 15 23:26:55.385216 containerd[1526]: time="2025-07-15T23:26:55.385176909Z" level=info msg="StartContainer for \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" returns successfully"
Jul 15 23:26:55.398943 systemd[1]: cri-containerd-eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0.scope: Deactivated successfully.
Jul 15 23:26:55.407189 containerd[1526]: time="2025-07-15T23:26:55.407142771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" id:\"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" pid:3187 exited_at:{seconds:1752622015 nanos:406849375}"
Jul 15 23:26:55.407314 containerd[1526]: time="2025-07-15T23:26:55.407197116Z" level=info msg="received exit event container_id:\"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" id:\"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" pid:3187 exited_at:{seconds:1752622015 nanos:406849375}"
Jul 15 23:26:56.260471 kubelet[2652]: E0715 23:26:56.260441 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:56.274430 containerd[1526]: time="2025-07-15T23:26:56.274376108Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 23:26:56.283569 containerd[1526]: time="2025-07-15T23:26:56.283521985Z" level=info msg="Container 091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:56.293142 containerd[1526]: time="2025-07-15T23:26:56.293028366Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\""
Jul 15 23:26:56.294036 containerd[1526]: time="2025-07-15T23:26:56.293817356Z" level=info msg="StartContainer for \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\""
Jul 15 23:26:56.295437 containerd[1526]: time="2025-07-15T23:26:56.295375299Z" level=info msg="connecting to shim 091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51" address="unix:///run/containerd/s/a9830b9cafe9a6be1ac4e9abba12afe9528cb22e554edf20f4d6273f59d09ea3" protocol=ttrpc version=3
Jul 15 23:26:56.320323 systemd[1]: Started cri-containerd-091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51.scope - libcontainer container 091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51.
Jul 15 23:26:56.348719 systemd[1]: cri-containerd-091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51.scope: Deactivated successfully.
Jul 15 23:26:56.351089 containerd[1526]: time="2025-07-15T23:26:56.349861228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" id:\"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" pid:3237 exited_at:{seconds:1752622016 nanos:349540754}"
Jul 15 23:26:56.351089 containerd[1526]: time="2025-07-15T23:26:56.350189301Z" level=info msg="received exit event container_id:\"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" id:\"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" pid:3237 exited_at:{seconds:1752622016 nanos:349540754}"
Jul 15 23:26:56.352547 containerd[1526]: time="2025-07-15T23:26:56.352508881Z" level=info msg="StartContainer for \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" returns successfully"
Jul 15 23:26:56.376363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51-rootfs.mount: Deactivated successfully.
Jul 15 23:26:56.469981 containerd[1526]: time="2025-07-15T23:26:56.469922244Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:26:56.471010 containerd[1526]: time="2025-07-15T23:26:56.470978082Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 15 23:26:56.471637 containerd[1526]: time="2025-07-15T23:26:56.471609074Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:26:56.473453 containerd[1526]: time="2025-07-15T23:26:56.473418871Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.690964741s"
Jul 15 23:26:56.473495 containerd[1526]: time="2025-07-15T23:26:56.473455861Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 15 23:26:56.475713 containerd[1526]: time="2025-07-15T23:26:56.475678427Z" level=info msg="CreateContainer within sandbox \"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 15 23:26:56.482130 containerd[1526]: time="2025-07-15T23:26:56.482076958Z" level=info msg="Container 7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:56.487248 containerd[1526]: time="2025-07-15T23:26:56.487206468Z" level=info msg="CreateContainer within sandbox \"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\""
Jul 15 23:26:56.487850 containerd[1526]: time="2025-07-15T23:26:56.487779115Z" level=info msg="StartContainer for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\""
Jul 15 23:26:56.488894 containerd[1526]: time="2025-07-15T23:26:56.488864905Z" level=info msg="connecting to shim 7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1" address="unix:///run/containerd/s/53e728f4532e5044b04483697cee05302d8748692e7c2dd8582cfa2ae081aed4" protocol=ttrpc version=3
Jul 15 23:26:56.511291 systemd[1]: Started cri-containerd-7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1.scope - libcontainer container 7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1.
Jul 15 23:26:56.542121 containerd[1526]: time="2025-07-15T23:26:56.541427628Z" level=info msg="StartContainer for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" returns successfully"
Jul 15 23:26:57.266068 kubelet[2652]: E0715 23:26:57.265735 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:57.275967 kubelet[2652]: E0715 23:26:57.275915 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:57.285075 containerd[1526]: time="2025-07-15T23:26:57.283725147Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 23:26:57.299838 kubelet[2652]: I0715 23:26:57.299762 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ngh4c" podStartSLOduration=1.7991709139999998 podStartE2EDuration="15.299742776s" podCreationTimestamp="2025-07-15 23:26:42 +0000 UTC" firstStartedPulling="2025-07-15 23:26:42.973532626 +0000 UTC m=+7.854256244" lastFinishedPulling="2025-07-15 23:26:56.474104487 +0000 UTC m=+21.354828106" observedRunningTime="2025-07-15 23:26:57.281781713 +0000 UTC m=+22.162505332" watchObservedRunningTime="2025-07-15 23:26:57.299742776 +0000 UTC m=+22.180466395"
Jul 15 23:26:57.337927 containerd[1526]: time="2025-07-15T23:26:57.337725347Z" level=info msg="Container b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:26:57.341876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780245695.mount: Deactivated successfully.
Jul 15 23:26:57.346710 containerd[1526]: time="2025-07-15T23:26:57.346664389Z" level=info msg="CreateContainer within sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\""
Jul 15 23:26:57.347617 containerd[1526]: time="2025-07-15T23:26:57.347577880Z" level=info msg="StartContainer for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\""
Jul 15 23:26:57.348559 containerd[1526]: time="2025-07-15T23:26:57.348528122Z" level=info msg="connecting to shim b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec" address="unix:///run/containerd/s/a9830b9cafe9a6be1ac4e9abba12afe9528cb22e554edf20f4d6273f59d09ea3" protocol=ttrpc version=3
Jul 15 23:26:57.368220 systemd[1]: Started cri-containerd-b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec.scope - libcontainer container b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec.
Jul 15 23:26:57.400395 containerd[1526]: time="2025-07-15T23:26:57.400345269Z" level=info msg="StartContainer for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" returns successfully"
Jul 15 23:26:57.553457 containerd[1526]: time="2025-07-15T23:26:57.552761949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" id:\"8ef3cc1338b4ee5e5fe7755cb08e909f5b959a9ac5f9f688f5b1e76238b4fa5f\" pid:3343 exited_at:{seconds:1752622017 nanos:552239759}"
Jul 15 23:26:57.588083 kubelet[2652]: I0715 23:26:57.588036 2652 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 23:26:57.661508 systemd[1]: Created slice kubepods-burstable-pode94fdae6_ab5e_44d5_8936_684f20d301d3.slice - libcontainer container kubepods-burstable-pode94fdae6_ab5e_44d5_8936_684f20d301d3.slice.
Jul 15 23:26:57.669805 systemd[1]: Created slice kubepods-burstable-pod686e6de4_df1c_4c44_b1c1_4aa1b4f8321c.slice - libcontainer container kubepods-burstable-pod686e6de4_df1c_4c44_b1c1_4aa1b4f8321c.slice.
Jul 15 23:26:57.739854 kubelet[2652]: I0715 23:26:57.739811 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgp8\" (UniqueName: \"kubernetes.io/projected/e94fdae6-ab5e-44d5-8936-684f20d301d3-kube-api-access-tjgp8\") pod \"coredns-7c65d6cfc9-wp2dq\" (UID: \"e94fdae6-ab5e-44d5-8936-684f20d301d3\") " pod="kube-system/coredns-7c65d6cfc9-wp2dq"
Jul 15 23:26:57.739854 kubelet[2652]: I0715 23:26:57.739860 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e94fdae6-ab5e-44d5-8936-684f20d301d3-config-volume\") pod \"coredns-7c65d6cfc9-wp2dq\" (UID: \"e94fdae6-ab5e-44d5-8936-684f20d301d3\") " pod="kube-system/coredns-7c65d6cfc9-wp2dq"
Jul 15 23:26:57.740045 kubelet[2652]: I0715 23:26:57.739882 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvdw\" (UniqueName: \"kubernetes.io/projected/686e6de4-df1c-4c44-b1c1-4aa1b4f8321c-kube-api-access-dsvdw\") pod \"coredns-7c65d6cfc9-bzq5r\" (UID: \"686e6de4-df1c-4c44-b1c1-4aa1b4f8321c\") " pod="kube-system/coredns-7c65d6cfc9-bzq5r"
Jul 15 23:26:57.740045 kubelet[2652]: I0715 23:26:57.739901 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/686e6de4-df1c-4c44-b1c1-4aa1b4f8321c-config-volume\") pod \"coredns-7c65d6cfc9-bzq5r\" (UID: \"686e6de4-df1c-4c44-b1c1-4aa1b4f8321c\") " pod="kube-system/coredns-7c65d6cfc9-bzq5r"
Jul 15 23:26:57.966077 kubelet[2652]: E0715 23:26:57.965729 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:57.966325 containerd[1526]: time="2025-07-15T23:26:57.966292734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wp2dq,Uid:e94fdae6-ab5e-44d5-8936-684f20d301d3,Namespace:kube-system,Attempt:0,}"
Jul 15 23:26:57.973853 kubelet[2652]: E0715 23:26:57.973823 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:57.974276 containerd[1526]: time="2025-07-15T23:26:57.974243943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzq5r,Uid:686e6de4-df1c-4c44-b1c1-4aa1b4f8321c,Namespace:kube-system,Attempt:0,}"
Jul 15 23:26:58.284428 kubelet[2652]: E0715 23:26:58.284387 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:58.284989 kubelet[2652]: E0715 23:26:58.284963 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:26:58.301130 kubelet[2652]: I0715 23:26:58.301066 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w82sq" podStartSLOduration=5.261175903 podStartE2EDuration="16.30103602s" podCreationTimestamp="2025-07-15 23:26:42 +0000 UTC" firstStartedPulling="2025-07-15 23:26:42.741839497 +0000 UTC m=+7.622563116" lastFinishedPulling="2025-07-15 23:26:53.781699614 +0000 UTC m=+18.662423233" observedRunningTime="2025-07-15 23:26:58.300997869 +0000 UTC m=+23.181721488" watchObservedRunningTime="2025-07-15 23:26:58.30103602 +0000 UTC m=+23.181759639"
Jul 15 23:26:59.286660 kubelet[2652]: E0715 23:26:59.286576 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:00.289731 kubelet[2652]: E0715 23:27:00.289700 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:00.450551 systemd-networkd[1440]: cilium_host: Link UP
Jul 15 23:27:00.450668 systemd-networkd[1440]: cilium_net: Link UP
Jul 15 23:27:00.450798 systemd-networkd[1440]: cilium_net: Gained carrier
Jul 15 23:27:00.450916 systemd-networkd[1440]: cilium_host: Gained carrier
Jul 15 23:27:00.536991 systemd-networkd[1440]: cilium_vxlan: Link UP
Jul 15 23:27:00.536997 systemd-networkd[1440]: cilium_vxlan: Gained carrier
Jul 15 23:27:00.867097 kernel: NET: Registered PF_ALG protocol family
Jul 15 23:27:00.926281 systemd-networkd[1440]: cilium_host: Gained IPv6LL
Jul 15 23:27:01.390222 systemd-networkd[1440]: cilium_net: Gained IPv6LL
Jul 15 23:27:01.458022 systemd-networkd[1440]: lxc_health: Link UP
Jul 15 23:27:01.458291 systemd-networkd[1440]: lxc_health: Gained carrier
Jul 15 23:27:01.714715 systemd-networkd[1440]: lxc79efbedcc65e: Link UP
Jul 15 23:27:01.717073 kernel: eth0: renamed from tmp92aba
Jul 15 23:27:01.717513 systemd-networkd[1440]: lxcba3ae1664e31: Link UP
Jul 15 23:27:01.730163 kernel: eth0: renamed from tmp8c9e9
Jul 15 23:27:01.730723 systemd-networkd[1440]: lxcba3ae1664e31: Gained carrier
Jul 15 23:27:01.731033 systemd-networkd[1440]: lxc79efbedcc65e: Gained carrier
Jul 15 23:27:02.095226 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL
Jul 15 23:27:02.688244 kubelet[2652]: E0715 23:27:02.688213 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:03.054241 systemd-networkd[1440]: lxcba3ae1664e31: Gained IPv6LL
Jul 15 23:27:03.182238 systemd-networkd[1440]: lxc_health: Gained IPv6LL
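The repeated `dns.go:153` "Nameserver limits exceeded" entries above all report the same condition: the node's resolv.conf contained more nameservers than the resolver limit of three, so kubelet applied only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal sketch of that truncation, assuming a hypothetical resolv.conf with a fourth entry (this is an illustration, not kubelet's actual code):

```python
# Hypothetical reproduction of the kubelet dns.go:153 behavior seen in the
# log: at most 3 "nameserver" entries are honored; any extras are dropped.
MAX_NAMESERVERS = 3  # the classic glibc resolver limit kubelet enforces

# Assumed resolv.conf contents; only the fourth entry is invented here.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

# Collect the nameserver addresses in order, then truncate to the limit.
servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
applied = servers[:MAX_NAMESERVERS]
print("the applied nameserver line is:", " ".join(applied))
```

With that input the script prints the same applied list the kubelet errors report, which is why the message recurs on every DNS-config resync rather than indicating a new fault each time.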
Jul 15 23:27:03.295444 kubelet[2652]: E0715 23:27:03.295412 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:03.502291 systemd-networkd[1440]: lxc79efbedcc65e: Gained IPv6LL
Jul 15 23:27:03.879392 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:40170.service - OpenSSH per-connection server daemon (10.0.0.1:40170).
Jul 15 23:27:03.949323 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 40170 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:03.950921 sshd-session[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:03.955631 systemd-logind[1509]: New session 8 of user core.
Jul 15 23:27:03.963259 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 23:27:04.103248 sshd[3828]: Connection closed by 10.0.0.1 port 40170
Jul 15 23:27:04.103828 sshd-session[3826]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:04.107695 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:40170.service: Deactivated successfully.
Jul 15 23:27:04.109699 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 23:27:04.110554 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Jul 15 23:27:04.111933 systemd-logind[1509]: Removed session 8.
Jul 15 23:27:05.525077 containerd[1526]: time="2025-07-15T23:27:05.524488713Z" level=info msg="connecting to shim 8c9e9a14608270decbde3547b91774a09c931f4dc90d569e42c00b353799eef1" address="unix:///run/containerd/s/afd5b73b773ea9cd37fea04f353e7f0f97b035c8ff538e1b0d791ec0db670c4d" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:27:05.525780 containerd[1526]: time="2025-07-15T23:27:05.525736727Z" level=info msg="connecting to shim 92aba32259f0947b2477f76728390ebdcea66a4e467ff82b51907ed9fcff99ff" address="unix:///run/containerd/s/edba50541b1955f08b560c56d7accd8279caac268e07ef4c76e34b2a39b2e7ca" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:27:05.556354 systemd[1]: Started cri-containerd-92aba32259f0947b2477f76728390ebdcea66a4e467ff82b51907ed9fcff99ff.scope - libcontainer container 92aba32259f0947b2477f76728390ebdcea66a4e467ff82b51907ed9fcff99ff.
Jul 15 23:27:05.576559 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:27:05.602724 containerd[1526]: time="2025-07-15T23:27:05.602668075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wp2dq,Uid:e94fdae6-ab5e-44d5-8936-684f20d301d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"92aba32259f0947b2477f76728390ebdcea66a4e467ff82b51907ed9fcff99ff\""
Jul 15 23:27:05.609349 kubelet[2652]: E0715 23:27:05.609299 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:05.620871 containerd[1526]: time="2025-07-15T23:27:05.620823362Z" level=info msg="CreateContainer within sandbox \"92aba32259f0947b2477f76728390ebdcea66a4e467ff82b51907ed9fcff99ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:27:05.624308 systemd[1]: Started cri-containerd-8c9e9a14608270decbde3547b91774a09c931f4dc90d569e42c00b353799eef1.scope - libcontainer container 8c9e9a14608270decbde3547b91774a09c931f4dc90d569e42c00b353799eef1.
Jul 15 23:27:05.632571 containerd[1526]: time="2025-07-15T23:27:05.632517855Z" level=info msg="Container c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:05.638286 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:27:05.641750 containerd[1526]: time="2025-07-15T23:27:05.641708283Z" level=info msg="CreateContainer within sandbox \"92aba32259f0947b2477f76728390ebdcea66a4e467ff82b51907ed9fcff99ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72\""
Jul 15 23:27:05.642670 containerd[1526]: time="2025-07-15T23:27:05.642633624Z" level=info msg="StartContainer for \"c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72\""
Jul 15 23:27:05.644818 containerd[1526]: time="2025-07-15T23:27:05.644776104Z" level=info msg="connecting to shim c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72" address="unix:///run/containerd/s/edba50541b1955f08b560c56d7accd8279caac268e07ef4c76e34b2a39b2e7ca" protocol=ttrpc version=3
Jul 15 23:27:05.664127 containerd[1526]: time="2025-07-15T23:27:05.662903556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzq5r,Uid:686e6de4-df1c-4c44-b1c1-4aa1b4f8321c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c9e9a14608270decbde3547b91774a09c931f4dc90d569e42c00b353799eef1\""
Jul 15 23:27:05.664283 kubelet[2652]: E0715 23:27:05.664194 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:05.668317 systemd[1]: Started cri-containerd-c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72.scope - libcontainer container c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72.
Jul 15 23:27:05.669240 containerd[1526]: time="2025-07-15T23:27:05.669040640Z" level=info msg="CreateContainer within sandbox \"8c9e9a14608270decbde3547b91774a09c931f4dc90d569e42c00b353799eef1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:27:05.680248 containerd[1526]: time="2025-07-15T23:27:05.680120824Z" level=info msg="Container 3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:05.687036 containerd[1526]: time="2025-07-15T23:27:05.686797907Z" level=info msg="CreateContainer within sandbox \"8c9e9a14608270decbde3547b91774a09c931f4dc90d569e42c00b353799eef1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40\""
Jul 15 23:27:05.687453 containerd[1526]: time="2025-07-15T23:27:05.687425413Z" level=info msg="StartContainer for \"3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40\""
Jul 15 23:27:05.688475 containerd[1526]: time="2025-07-15T23:27:05.688443381Z" level=info msg="connecting to shim 3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40" address="unix:///run/containerd/s/afd5b73b773ea9cd37fea04f353e7f0f97b035c8ff538e1b0d791ec0db670c4d" protocol=ttrpc version=3
Jul 15 23:27:05.708721 containerd[1526]: time="2025-07-15T23:27:05.708666200Z" level=info msg="StartContainer for \"c9c702e3b2810746f1d2e708cf181f647e75eac77db00baea94d0be700603c72\" returns successfully"
Jul 15 23:27:05.721298 systemd[1]: Started cri-containerd-3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40.scope - libcontainer container 3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40.
Jul 15 23:27:05.793148 containerd[1526]: time="2025-07-15T23:27:05.783096961Z" level=info msg="StartContainer for \"3daf556a6a2722f6f873ff440bb9164378409793750907f0cd7b1cf1f4a0ca40\" returns successfully"
Jul 15 23:27:06.305696 kubelet[2652]: E0715 23:27:06.305335 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:06.306778 kubelet[2652]: E0715 23:27:06.306741 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:06.323816 kubelet[2652]: I0715 23:27:06.323547 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bzq5r" podStartSLOduration=24.323531 podStartE2EDuration="24.323531s" podCreationTimestamp="2025-07-15 23:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:27:06.319308832 +0000 UTC m=+31.200032451" watchObservedRunningTime="2025-07-15 23:27:06.323531 +0000 UTC m=+31.204254619"
Jul 15 23:27:06.345752 kubelet[2652]: I0715 23:27:06.345692 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wp2dq" podStartSLOduration=24.345548797 podStartE2EDuration="24.345548797s" podCreationTimestamp="2025-07-15 23:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:27:06.332534339 +0000 UTC m=+31.213257958" watchObservedRunningTime="2025-07-15 23:27:06.345548797 +0000 UTC m=+31.226272416"
Jul 15 23:27:06.509447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817594957.mount: Deactivated successfully.
Jul 15 23:27:07.309085 kubelet[2652]: E0715 23:27:07.308884 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:07.310589 kubelet[2652]: E0715 23:27:07.310531 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:08.310182 kubelet[2652]: E0715 23:27:08.310092 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:08.311093 kubelet[2652]: E0715 23:27:08.310295 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:09.117025 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:40174.service - OpenSSH per-connection server daemon (10.0.0.1:40174).
Jul 15 23:27:09.171854 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 40174 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:09.173313 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:09.177722 systemd-logind[1509]: New session 9 of user core.
Jul 15 23:27:09.184255 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:27:09.300142 sshd[4022]: Connection closed by 10.0.0.1 port 40174
Jul 15 23:27:09.300812 sshd-session[4020]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:09.303638 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:40174.service: Deactivated successfully.
Jul 15 23:27:09.305316 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:27:09.306618 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:27:09.308071 systemd-logind[1509]: Removed session 9.
Jul 15 23:27:14.312353 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:58110.service - OpenSSH per-connection server daemon (10.0.0.1:58110).
Jul 15 23:27:14.365267 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 58110 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:14.365994 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:14.370581 systemd-logind[1509]: New session 10 of user core.
Jul 15 23:27:14.383226 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:27:14.494589 sshd[4041]: Connection closed by 10.0.0.1 port 58110
Jul 15 23:27:14.495254 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:14.498492 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:58110.service: Deactivated successfully.
Jul 15 23:27:14.500153 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:27:14.502630 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:27:14.503837 systemd-logind[1509]: Removed session 10.
Jul 15 23:27:19.515831 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:58120.service - OpenSSH per-connection server daemon (10.0.0.1:58120).
Jul 15 23:27:19.585472 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 58120 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:19.586656 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:19.591004 systemd-logind[1509]: New session 11 of user core.
Jul 15 23:27:19.605243 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:27:19.721095 sshd[4058]: Connection closed by 10.0.0.1 port 58120
Jul 15 23:27:19.722408 sshd-session[4056]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:19.729273 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:58120.service: Deactivated successfully.
Jul 15 23:27:19.731013 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:27:19.732394 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:27:19.734638 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:58126.service - OpenSSH per-connection server daemon (10.0.0.1:58126).
Jul 15 23:27:19.735594 systemd-logind[1509]: Removed session 11.
Jul 15 23:27:19.787747 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 58126 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:19.790439 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:19.795425 systemd-logind[1509]: New session 12 of user core.
Jul 15 23:27:19.814213 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:27:19.963560 sshd[4076]: Connection closed by 10.0.0.1 port 58126
Jul 15 23:27:19.965999 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:19.975346 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:58126.service: Deactivated successfully.
Jul 15 23:27:19.978039 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:27:19.979724 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:27:19.983244 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128).
Jul 15 23:27:19.984920 systemd-logind[1509]: Removed session 12.
Jul 15 23:27:20.035900 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:20.036967 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:20.040940 systemd-logind[1509]: New session 13 of user core.
Jul 15 23:27:20.051203 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:27:20.166197 sshd[4089]: Connection closed by 10.0.0.1 port 58128
Jul 15 23:27:20.166510 sshd-session[4087]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:20.169865 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:58128.service: Deactivated successfully.
Jul 15 23:27:20.171639 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:27:20.172419 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:27:20.173857 systemd-logind[1509]: Removed session 13.
Jul 15 23:27:25.182110 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:57978.service - OpenSSH per-connection server daemon (10.0.0.1:57978).
Jul 15 23:27:25.228352 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 57978 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:25.229539 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:25.233440 systemd-logind[1509]: New session 14 of user core.
Jul 15 23:27:25.242218 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:27:25.358605 sshd[4107]: Connection closed by 10.0.0.1 port 57978
Jul 15 23:27:25.359114 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:25.362322 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:57978.service: Deactivated successfully.
Jul 15 23:27:25.363976 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:27:25.365868 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:27:25.367325 systemd-logind[1509]: Removed session 14.
Jul 15 23:27:30.379005 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:57984.service - OpenSSH per-connection server daemon (10.0.0.1:57984).
Jul 15 23:27:30.437611 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 57984 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:30.442941 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:30.452333 systemd-logind[1509]: New session 15 of user core.
Jul 15 23:27:30.459250 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:27:30.612034 sshd[4122]: Connection closed by 10.0.0.1 port 57984
Jul 15 23:27:30.612954 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:30.627647 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:57984.service: Deactivated successfully.
Jul 15 23:27:30.630276 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:27:30.631711 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:27:30.635483 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:57988.service - OpenSSH per-connection server daemon (10.0.0.1:57988).
Jul 15 23:27:30.637091 systemd-logind[1509]: Removed session 15.
Jul 15 23:27:30.691665 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 57988 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:30.693580 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:30.697947 systemd-logind[1509]: New session 16 of user core.
Jul 15 23:27:30.704212 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:27:30.946085 sshd[4137]: Connection closed by 10.0.0.1 port 57988
Jul 15 23:27:30.946839 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:30.958707 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:57988.service: Deactivated successfully.
Jul 15 23:27:30.960448 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:27:30.961240 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:27:30.963578 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002).
Jul 15 23:27:30.964733 systemd-logind[1509]: Removed session 16.
Jul 15 23:27:31.022584 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:31.023921 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:31.028565 systemd-logind[1509]: New session 17 of user core.
Jul 15 23:27:31.040255 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:27:32.236572 sshd[4150]: Connection closed by 10.0.0.1 port 58002
Jul 15 23:27:32.237112 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:32.247345 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:58002.service: Deactivated successfully.
Jul 15 23:27:32.250659 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:27:32.252730 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:27:32.258949 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:58010.service - OpenSSH per-connection server daemon (10.0.0.1:58010).
Jul 15 23:27:32.260112 systemd-logind[1509]: Removed session 17.
Jul 15 23:27:32.309040 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 58010 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:32.310572 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:32.315101 systemd-logind[1509]: New session 18 of user core.
Jul 15 23:27:32.325241 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:27:32.547588 sshd[4171]: Connection closed by 10.0.0.1 port 58010
Jul 15 23:27:32.548323 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:32.557952 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:58010.service: Deactivated successfully.
Jul 15 23:27:32.561232 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:27:32.563576 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:27:32.565164 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:52534.service - OpenSSH per-connection server daemon (10.0.0.1:52534).
Jul 15 23:27:32.566568 systemd-logind[1509]: Removed session 18.
Jul 15 23:27:32.621456 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 52534 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:32.622896 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:32.627610 systemd-logind[1509]: New session 19 of user core.
Jul 15 23:27:32.633239 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:27:32.747244 sshd[4184]: Connection closed by 10.0.0.1 port 52534
Jul 15 23:27:32.747587 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:32.751155 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:27:32.751281 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:52534.service: Deactivated successfully.
Jul 15 23:27:32.753314 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:27:32.755740 systemd-logind[1509]: Removed session 19.
Jul 15 23:27:37.762621 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:52536.service - OpenSSH per-connection server daemon (10.0.0.1:52536).
Jul 15 23:27:37.801776 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 52536 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:37.802965 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:37.806758 systemd-logind[1509]: New session 20 of user core.
Jul 15 23:27:37.820370 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:27:37.929281 sshd[4204]: Connection closed by 10.0.0.1 port 52536
Jul 15 23:27:37.929571 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:37.933667 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:52536.service: Deactivated successfully.
Jul 15 23:27:37.935546 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:27:37.936422 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:27:37.938220 systemd-logind[1509]: Removed session 20.
Jul 15 23:27:42.941299 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:45214.service - OpenSSH per-connection server daemon (10.0.0.1:45214).
Jul 15 23:27:42.979580 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 45214 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:42.980963 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:42.988974 systemd-logind[1509]: New session 21 of user core.
Jul 15 23:27:43.001395 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:27:43.111941 sshd[4219]: Connection closed by 10.0.0.1 port 45214
Jul 15 23:27:43.112268 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:43.115580 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:45214.service: Deactivated successfully.
Jul 15 23:27:43.117149 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:27:43.117880 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:27:43.119209 systemd-logind[1509]: Removed session 21.
Jul 15 23:27:48.123804 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:45240.service - OpenSSH per-connection server daemon (10.0.0.1:45240).
Jul 15 23:27:48.177849 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 45240 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:48.179213 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:48.182978 systemd-logind[1509]: New session 22 of user core.
Jul 15 23:27:48.195211 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:27:48.304312 sshd[4236]: Connection closed by 10.0.0.1 port 45240
Jul 15 23:27:48.304806 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:48.319331 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:45240.service: Deactivated successfully.
Jul 15 23:27:48.321336 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:27:48.322338 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:27:48.325272 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:45254.service - OpenSSH per-connection server daemon (10.0.0.1:45254).
Jul 15 23:27:48.326084 systemd-logind[1509]: Removed session 22.
Jul 15 23:27:48.385693 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 45254 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:48.387163 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:48.391074 systemd-logind[1509]: New session 23 of user core.
Jul 15 23:27:48.399200 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:27:49.192494 kubelet[2652]: E0715 23:27:49.192445 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:50.474037 containerd[1526]: time="2025-07-15T23:27:50.473900214Z" level=info msg="StopContainer for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" with timeout 30 (s)"
Jul 15 23:27:50.475164 containerd[1526]: time="2025-07-15T23:27:50.475127517Z" level=info msg="Stop container \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" with signal terminated"
Jul 15 23:27:50.487713 systemd[1]: cri-containerd-7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1.scope: Deactivated successfully.
Jul 15 23:27:50.490248 containerd[1526]: time="2025-07-15T23:27:50.490218591Z" level=info msg="received exit event container_id:\"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" id:\"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" pid:3277 exited_at:{seconds:1752622070 nanos:489397403}"
Jul 15 23:27:50.490681 containerd[1526]: time="2025-07-15T23:27:50.490398229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" id:\"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" pid:3277 exited_at:{seconds:1752622070 nanos:489397403}"
Jul 15 23:27:50.506262 containerd[1526]: time="2025-07-15T23:27:50.506164774Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:27:50.511710 containerd[1526]: time="2025-07-15T23:27:50.511657300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" id:\"025f550c4a36328a32cfdee16d5c9afe680212f7d70eed53e35284c130b55604\" pid:4280 exited_at:{seconds:1752622070 nanos:511366983}"
Jul 15 23:27:50.513915 containerd[1526]: time="2025-07-15T23:27:50.513889429Z" level=info msg="StopContainer for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" with timeout 2 (s)"
Jul 15 23:27:50.514320 containerd[1526]: time="2025-07-15T23:27:50.514300784Z" level=info msg="Stop container \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" with signal terminated"
Jul 15 23:27:50.518949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1-rootfs.mount: Deactivated successfully.
Jul 15 23:27:50.521932 systemd-networkd[1440]: lxc_health: Link DOWN
Jul 15 23:27:50.521938 systemd-networkd[1440]: lxc_health: Lost carrier
Jul 15 23:27:50.539477 containerd[1526]: time="2025-07-15T23:27:50.539423721Z" level=info msg="StopContainer for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" returns successfully"
Jul 15 23:27:50.542201 containerd[1526]: time="2025-07-15T23:27:50.542129445Z" level=info msg="StopPodSandbox for \"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\""
Jul 15 23:27:50.542330 containerd[1526]: time="2025-07-15T23:27:50.542312122Z" level=info msg="Container to stop \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:27:50.544668 systemd[1]: cri-containerd-b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec.scope: Deactivated successfully.
Jul 15 23:27:50.544989 systemd[1]: cri-containerd-b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec.scope: Consumed 6.869s CPU time, 122.7M memory peak, 148K read from disk, 12.9M written to disk.
Jul 15 23:27:50.546088 containerd[1526]: time="2025-07-15T23:27:50.546027831Z" level=info msg="received exit event container_id:\"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" id:\"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" pid:3313 exited_at:{seconds:1752622070 nanos:545811874}"
Jul 15 23:27:50.546267 containerd[1526]: time="2025-07-15T23:27:50.546238309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" id:\"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" pid:3313 exited_at:{seconds:1752622070 nanos:545811874}"
Jul 15 23:27:50.555124 systemd[1]: cri-containerd-b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8.scope: Deactivated successfully.
Jul 15 23:27:50.560623 containerd[1526]: time="2025-07-15T23:27:50.560586433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" id:\"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" pid:2897 exit_status:137 exited_at:{seconds:1752622070 nanos:560279077}"
Jul 15 23:27:50.566975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec-rootfs.mount: Deactivated successfully.
Jul 15 23:27:50.579597 containerd[1526]: time="2025-07-15T23:27:50.579559975Z" level=info msg="StopContainer for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" returns successfully"
Jul 15 23:27:50.580251 containerd[1526]: time="2025-07-15T23:27:50.580223606Z" level=info msg="StopPodSandbox for \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\""
Jul 15 23:27:50.580458 containerd[1526]: time="2025-07-15T23:27:50.580429963Z" level=info msg="Container to stop \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:27:50.580557 containerd[1526]: time="2025-07-15T23:27:50.580541201Z" level=info msg="Container to stop \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:27:50.580629 containerd[1526]: time="2025-07-15T23:27:50.580616880Z" level=info msg="Container to stop \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:27:50.580771 containerd[1526]: time="2025-07-15T23:27:50.580673520Z" level=info msg="Container to stop \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:27:50.580771 containerd[1526]: time="2025-07-15T23:27:50.580700599Z" level=info msg="Container to stop \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:27:50.586216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8-rootfs.mount: Deactivated successfully.
Jul 15 23:27:50.587635 systemd[1]: cri-containerd-08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166.scope: Deactivated successfully.
Jul 15 23:27:50.593146 containerd[1526]: time="2025-07-15T23:27:50.593033391Z" level=info msg="shim disconnected" id=b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8 namespace=k8s.io
Jul 15 23:27:50.600878 containerd[1526]: time="2025-07-15T23:27:50.593132390Z" level=warning msg="cleaning up after shim disconnected" id=b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8 namespace=k8s.io
Jul 15 23:27:50.600979 containerd[1526]: time="2025-07-15T23:27:50.600879164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:27:50.609282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166-rootfs.mount: Deactivated successfully.
Jul 15 23:27:50.611392 containerd[1526]: time="2025-07-15T23:27:50.611259303Z" level=info msg="shim disconnected" id=08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166 namespace=k8s.io
Jul 15 23:27:50.611495 containerd[1526]: time="2025-07-15T23:27:50.611390741Z" level=warning msg="cleaning up after shim disconnected" id=08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166 namespace=k8s.io
Jul 15 23:27:50.611495 containerd[1526]: time="2025-07-15T23:27:50.611422101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:27:50.618332 containerd[1526]: time="2025-07-15T23:27:50.618277967Z" level=info msg="received exit event sandbox_id:\"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" exit_status:137 exited_at:{seconds:1752622070 nanos:560279077}"
Jul 15 23:27:50.619222 containerd[1526]: time="2025-07-15T23:27:50.619188475Z" level=info msg="TearDown network for sandbox \"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" successfully"
Jul 15 23:27:50.619222 containerd[1526]: time="2025-07-15T23:27:50.619223155Z" level=info msg="StopPodSandbox for \"b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8\" returns successfully"
Jul 15 23:27:50.619580 containerd[1526]: time="2025-07-15T23:27:50.619550830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" id:\"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" pid:2808 exit_status:137 exited_at:{seconds:1752622070 nanos:588105178}"
Jul 15 23:27:50.619667 containerd[1526]: time="2025-07-15T23:27:50.619649029Z" level=info msg="received exit event sandbox_id:\"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" exit_status:137 exited_at:{seconds:1752622070 nanos:588105178}"
Jul 15 23:27:50.619732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1eeeed98f27d67d1d88bc22e9f89ac17a61f09e385484a7e7bbb353afd7d5e8-shm.mount: Deactivated successfully.
Jul 15 23:27:50.621124 containerd[1526]: time="2025-07-15T23:27:50.621092929Z" level=info msg="TearDown network for sandbox \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" successfully"
Jul 15 23:27:50.621124 containerd[1526]: time="2025-07-15T23:27:50.621121209Z" level=info msg="StopPodSandbox for \"08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166\" returns successfully"
Jul 15 23:27:50.690828 kubelet[2652]: I0715 23:27:50.690784 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cni-path\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691230 kubelet[2652]: I0715 23:27:50.690877 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50b2b2aa-8964-419a-b17c-2250a437abab-cilium-config-path\") pod \"50b2b2aa-8964-419a-b17c-2250a437abab\" (UID: \"50b2b2aa-8964-419a-b17c-2250a437abab\") "
Jul 15 23:27:50.691230 kubelet[2652]: I0715 23:27:50.690903 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-hubble-tls\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691230 kubelet[2652]: I0715 23:27:50.690920 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmzdm\" (UniqueName: \"kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-kube-api-access-lmzdm\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691230 kubelet[2652]: I0715 23:27:50.690938 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m44f\" (UniqueName: \"kubernetes.io/projected/50b2b2aa-8964-419a-b17c-2250a437abab-kube-api-access-2m44f\") pod \"50b2b2aa-8964-419a-b17c-2250a437abab\" (UID: \"50b2b2aa-8964-419a-b17c-2250a437abab\") "
Jul 15 23:27:50.691230 kubelet[2652]: I0715 23:27:50.690952 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-net\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691230 kubelet[2652]: I0715 23:27:50.690966 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-bpf-maps\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691369 kubelet[2652]: I0715 23:27:50.690989 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fc73677-9c82-4a92-bbd8-2900ae94b719-clustermesh-secrets\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691369 kubelet[2652]: I0715 23:27:50.691005 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-hostproc\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691369 kubelet[2652]: I0715 23:27:50.691019 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-etc-cni-netd\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691369 kubelet[2652]: I0715 23:27:50.691035 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-xtables-lock\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691369 kubelet[2652]: I0715 23:27:50.691076 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-kernel\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691369 kubelet[2652]: I0715 23:27:50.691097 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-config-path\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691483 kubelet[2652]: I0715 23:27:50.691110 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-cgroup\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691483 kubelet[2652]: I0715 23:27:50.691128 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-run\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") "
Jul 15 23:27:50.691483 kubelet[2652]: I0715 23:27:50.691141 2652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-lib-modules\") pod \"7fc73677-9c82-4a92-bbd8-2900ae94b719\" (UID: \"7fc73677-9c82-4a92-bbd8-2900ae94b719\") " Jul 15 23:27:50.695066 kubelet[2652]: I0715 23:27:50.694841 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695066 kubelet[2652]: I0715 23:27:50.694875 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695066 kubelet[2652]: I0715 23:27:50.694933 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695066 kubelet[2652]: I0715 23:27:50.694935 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695066 kubelet[2652]: I0715 23:27:50.694961 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695297 kubelet[2652]: I0715 23:27:50.694980 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695297 kubelet[2652]: I0715 23:27:50.694997 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.695427 kubelet[2652]: I0715 23:27:50.695397 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.696085 kubelet[2652]: I0715 23:27:50.696016 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.696297 kubelet[2652]: I0715 23:27:50.696254 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 23:27:50.696617 kubelet[2652]: I0715 23:27:50.696347 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc73677-9c82-4a92-bbd8-2900ae94b719-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 23:27:50.697662 kubelet[2652]: I0715 23:27:50.697629 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 23:27:50.698017 kubelet[2652]: I0715 23:27:50.697967 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-kube-api-access-lmzdm" (OuterVolumeSpecName: "kube-api-access-lmzdm") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "kube-api-access-lmzdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 23:27:50.698226 kubelet[2652]: I0715 23:27:50.698197 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b2b2aa-8964-419a-b17c-2250a437abab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50b2b2aa-8964-419a-b17c-2250a437abab" (UID: "50b2b2aa-8964-419a-b17c-2250a437abab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 23:27:50.698783 kubelet[2652]: I0715 23:27:50.698744 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fc73677-9c82-4a92-bbd8-2900ae94b719" (UID: "7fc73677-9c82-4a92-bbd8-2900ae94b719"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 23:27:50.699029 kubelet[2652]: I0715 23:27:50.698994 2652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b2b2aa-8964-419a-b17c-2250a437abab-kube-api-access-2m44f" (OuterVolumeSpecName: "kube-api-access-2m44f") pod "50b2b2aa-8964-419a-b17c-2250a437abab" (UID: "50b2b2aa-8964-419a-b17c-2250a437abab"). InnerVolumeSpecName "kube-api-access-2m44f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791427 2652 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50b2b2aa-8964-419a-b17c-2250a437abab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791467 2652 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791478 2652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmzdm\" (UniqueName: \"kubernetes.io/projected/7fc73677-9c82-4a92-bbd8-2900ae94b719-kube-api-access-lmzdm\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791486 2652 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791495 2652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m44f\" (UniqueName: \"kubernetes.io/projected/50b2b2aa-8964-419a-b17c-2250a437abab-kube-api-access-2m44f\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791507 2652 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 kubelet[2652]: I0715 23:27:50.791515 2652 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791493 
kubelet[2652]: I0715 23:27:50.791523 2652 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fc73677-9c82-4a92-bbd8-2900ae94b719-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791531 2652 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791539 2652 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791546 2652 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791554 2652 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791561 2652 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791568 2652 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791575 2652 reconciler_common.go:293] "Volume detached for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:50.791855 kubelet[2652]: I0715 23:27:50.791582 2652 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fc73677-9c82-4a92-bbd8-2900ae94b719-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:27:51.199235 systemd[1]: Removed slice kubepods-burstable-pod7fc73677_9c82_4a92_bbd8_2900ae94b719.slice - libcontainer container kubepods-burstable-pod7fc73677_9c82_4a92_bbd8_2900ae94b719.slice. Jul 15 23:27:51.199335 systemd[1]: kubepods-burstable-pod7fc73677_9c82_4a92_bbd8_2900ae94b719.slice: Consumed 7.076s CPU time, 123M memory peak, 152K read from disk, 12.9M written to disk. Jul 15 23:27:51.202471 systemd[1]: Removed slice kubepods-besteffort-pod50b2b2aa_8964_419a_b17c_2250a437abab.slice - libcontainer container kubepods-besteffort-pod50b2b2aa_8964_419a_b17c_2250a437abab.slice. 
Jul 15 23:27:51.406929 kubelet[2652]: I0715 23:27:51.406898 2652 scope.go:117] "RemoveContainer" containerID="b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec" Jul 15 23:27:51.409581 containerd[1526]: time="2025-07-15T23:27:51.409547868Z" level=info msg="RemoveContainer for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\"" Jul 15 23:27:51.424515 containerd[1526]: time="2025-07-15T23:27:51.424444190Z" level=info msg="RemoveContainer for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" returns successfully" Jul 15 23:27:51.424923 kubelet[2652]: I0715 23:27:51.424876 2652 scope.go:117] "RemoveContainer" containerID="091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51" Jul 15 23:27:51.426695 containerd[1526]: time="2025-07-15T23:27:51.426628041Z" level=info msg="RemoveContainer for \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\"" Jul 15 23:27:51.429981 containerd[1526]: time="2025-07-15T23:27:51.429946796Z" level=info msg="RemoveContainer for \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" returns successfully" Jul 15 23:27:51.430730 kubelet[2652]: I0715 23:27:51.430701 2652 scope.go:117] "RemoveContainer" containerID="eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0" Jul 15 23:27:51.433233 containerd[1526]: time="2025-07-15T23:27:51.433203513Z" level=info msg="RemoveContainer for \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\"" Jul 15 23:27:51.438332 containerd[1526]: time="2025-07-15T23:27:51.437809932Z" level=info msg="RemoveContainer for \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" returns successfully" Jul 15 23:27:51.438934 kubelet[2652]: I0715 23:27:51.438894 2652 scope.go:117] "RemoveContainer" containerID="7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa" Jul 15 23:27:51.441571 containerd[1526]: time="2025-07-15T23:27:51.441460683Z" level=info msg="RemoveContainer for 
\"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\"" Jul 15 23:27:51.445413 containerd[1526]: time="2025-07-15T23:27:51.445331711Z" level=info msg="RemoveContainer for \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" returns successfully" Jul 15 23:27:51.445855 kubelet[2652]: I0715 23:27:51.445514 2652 scope.go:117] "RemoveContainer" containerID="af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d" Jul 15 23:27:51.447644 containerd[1526]: time="2025-07-15T23:27:51.447618401Z" level=info msg="RemoveContainer for \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\"" Jul 15 23:27:51.450963 containerd[1526]: time="2025-07-15T23:27:51.450860758Z" level=info msg="RemoveContainer for \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" returns successfully" Jul 15 23:27:51.451249 kubelet[2652]: I0715 23:27:51.451150 2652 scope.go:117] "RemoveContainer" containerID="b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec" Jul 15 23:27:51.451458 containerd[1526]: time="2025-07-15T23:27:51.451374471Z" level=error msg="ContainerStatus for \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\": not found" Jul 15 23:27:51.452609 kubelet[2652]: E0715 23:27:51.452560 2652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\": not found" containerID="b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec" Jul 15 23:27:51.452701 kubelet[2652]: I0715 23:27:51.452613 2652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec"} err="failed to get 
container status \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"b07b860f6236f1fdcc7a47c9814ae7a34fd7624e98bda9c093bdd6168e5813ec\": not found" Jul 15 23:27:51.452701 kubelet[2652]: I0715 23:27:51.452694 2652 scope.go:117] "RemoveContainer" containerID="091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51" Jul 15 23:27:51.452916 containerd[1526]: time="2025-07-15T23:27:51.452884451Z" level=error msg="ContainerStatus for \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\": not found" Jul 15 23:27:51.453155 kubelet[2652]: E0715 23:27:51.453037 2652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\": not found" containerID="091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51" Jul 15 23:27:51.453155 kubelet[2652]: I0715 23:27:51.453075 2652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51"} err="failed to get container status \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\": rpc error: code = NotFound desc = an error occurred when try to find container \"091a42470d6b24ce6d9268f5e1ac5a3e225a2a503165f4014e9454f4af49bb51\": not found" Jul 15 23:27:51.453155 kubelet[2652]: I0715 23:27:51.453093 2652 scope.go:117] "RemoveContainer" containerID="eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0" Jul 15 23:27:51.453267 containerd[1526]: time="2025-07-15T23:27:51.453241486Z" level=error msg="ContainerStatus for 
\"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\": not found" Jul 15 23:27:51.453402 kubelet[2652]: E0715 23:27:51.453350 2652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\": not found" containerID="eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0" Jul 15 23:27:51.453402 kubelet[2652]: I0715 23:27:51.453381 2652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0"} err="failed to get container status \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"eeef50a3c2d0a6035ea14929984a3174c8942dc0d877438f3296d4e69e60b3d0\": not found" Jul 15 23:27:51.453402 kubelet[2652]: I0715 23:27:51.453397 2652 scope.go:117] "RemoveContainer" containerID="7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa" Jul 15 23:27:51.453601 containerd[1526]: time="2025-07-15T23:27:51.453550922Z" level=error msg="ContainerStatus for \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\": not found" Jul 15 23:27:51.453701 kubelet[2652]: E0715 23:27:51.453670 2652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\": not found" 
containerID="7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa" Jul 15 23:27:51.453701 kubelet[2652]: I0715 23:27:51.453687 2652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa"} err="failed to get container status \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"7daf7527074c9581abd6944d4ffd95673f411372842e212b7af7bc73492adeaa\": not found" Jul 15 23:27:51.453701 kubelet[2652]: I0715 23:27:51.453700 2652 scope.go:117] "RemoveContainer" containerID="af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d" Jul 15 23:27:51.453955 containerd[1526]: time="2025-07-15T23:27:51.453854438Z" level=error msg="ContainerStatus for \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\": not found" Jul 15 23:27:51.454124 kubelet[2652]: E0715 23:27:51.454097 2652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\": not found" containerID="af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d" Jul 15 23:27:51.454179 kubelet[2652]: I0715 23:27:51.454130 2652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d"} err="failed to get container status \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"af05c42bc963a527559d9987b6f3f66f24b4d0ac4c56360e3466fead31e0de0d\": not found" Jul 15 
23:27:51.454179 kubelet[2652]: I0715 23:27:51.454163 2652 scope.go:117] "RemoveContainer" containerID="7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1" Jul 15 23:27:51.455656 containerd[1526]: time="2025-07-15T23:27:51.455632814Z" level=info msg="RemoveContainer for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\"" Jul 15 23:27:51.458351 containerd[1526]: time="2025-07-15T23:27:51.458297139Z" level=info msg="RemoveContainer for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" returns successfully" Jul 15 23:27:51.458550 kubelet[2652]: I0715 23:27:51.458531 2652 scope.go:117] "RemoveContainer" containerID="7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1" Jul 15 23:27:51.458812 containerd[1526]: time="2025-07-15T23:27:51.458703053Z" level=error msg="ContainerStatus for \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\": not found" Jul 15 23:27:51.458887 kubelet[2652]: E0715 23:27:51.458837 2652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\": not found" containerID="7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1" Jul 15 23:27:51.458887 kubelet[2652]: I0715 23:27:51.458871 2652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1"} err="failed to get container status \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c2cb580d6046755a76e8f1175a7aecb373f59cc041a86bfe9af1c108e8e10e1\": not found" Jul 15 23:27:51.518133 systemd[1]: 
var-lib-kubelet-pods-50b2b2aa\x2d8964\x2d419a\x2db17c\x2d2250a437abab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2m44f.mount: Deactivated successfully. Jul 15 23:27:51.518226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08b3cb83c7e5ae69b57ea759d780b90f5d3b1b7f2ee3789ce12cf65133f0f166-shm.mount: Deactivated successfully. Jul 15 23:27:51.518275 systemd[1]: var-lib-kubelet-pods-7fc73677\x2d9c82\x2d4a92\x2dbbd8\x2d2900ae94b719-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlmzdm.mount: Deactivated successfully. Jul 15 23:27:51.518331 systemd[1]: var-lib-kubelet-pods-7fc73677\x2d9c82\x2d4a92\x2dbbd8\x2d2900ae94b719-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 23:27:51.518379 systemd[1]: var-lib-kubelet-pods-7fc73677\x2d9c82\x2d4a92\x2dbbd8\x2d2900ae94b719-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 23:27:52.190976 kubelet[2652]: E0715 23:27:52.190933 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:27:52.434864 sshd[4251]: Connection closed by 10.0.0.1 port 45254 Jul 15 23:27:52.435212 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jul 15 23:27:52.455600 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:45254.service: Deactivated successfully. Jul 15 23:27:52.457515 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 23:27:52.457740 systemd[1]: session-23.scope: Consumed 1.413s CPU time, 23.8M memory peak. Jul 15 23:27:52.459128 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. Jul 15 23:27:52.461493 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:45266.service - OpenSSH per-connection server daemon (10.0.0.1:45266). Jul 15 23:27:52.462662 systemd-logind[1509]: Removed session 23. 
Jul 15 23:27:52.517658 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 45266 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:52.518746 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:52.523172 systemd-logind[1509]: New session 24 of user core.
Jul 15 23:27:52.533260 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 23:27:53.125398 sshd[4405]: Connection closed by 10.0.0.1 port 45266
Jul 15 23:27:53.125740 sshd-session[4403]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:53.139331 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:45266.service: Deactivated successfully.
Jul 15 23:27:53.146354 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:27:53.147365 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:27:53.151856 kubelet[2652]: E0715 23:27:53.150982 2652 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" containerName="mount-bpf-fs"
Jul 15 23:27:53.151856 kubelet[2652]: E0715 23:27:53.151014 2652 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" containerName="cilium-agent"
Jul 15 23:27:53.151856 kubelet[2652]: E0715 23:27:53.151020 2652 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" containerName="mount-cgroup"
Jul 15 23:27:53.151856 kubelet[2652]: E0715 23:27:53.151025 2652 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" containerName="apply-sysctl-overwrites"
Jul 15 23:27:53.151856 kubelet[2652]: E0715 23:27:53.151031 2652 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" containerName="clean-cilium-state"
Jul 15 23:27:53.151856 kubelet[2652]: E0715 23:27:53.151037 2652 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50b2b2aa-8964-419a-b17c-2250a437abab" containerName="cilium-operator"
Jul 15 23:27:53.152577 kubelet[2652]: I0715 23:27:53.152539 2652 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b2b2aa-8964-419a-b17c-2250a437abab" containerName="cilium-operator"
Jul 15 23:27:53.152577 kubelet[2652]: I0715 23:27:53.152567 2652 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" containerName="cilium-agent"
Jul 15 23:27:53.154134 systemd-logind[1509]: Removed session 24.
Jul 15 23:27:53.159409 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:41636.service - OpenSSH per-connection server daemon (10.0.0.1:41636).
Jul 15 23:27:53.181042 systemd[1]: Created slice kubepods-burstable-pod05085ac4_122d_478c_826a_bf4fc9e8bb59.slice - libcontainer container kubepods-burstable-pod05085ac4_122d_478c_826a_bf4fc9e8bb59.slice.
Jul 15 23:27:53.195385 kubelet[2652]: I0715 23:27:53.195340 2652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b2b2aa-8964-419a-b17c-2250a437abab" path="/var/lib/kubelet/pods/50b2b2aa-8964-419a-b17c-2250a437abab/volumes"
Jul 15 23:27:53.195800 kubelet[2652]: I0715 23:27:53.195774 2652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc73677-9c82-4a92-bbd8-2900ae94b719" path="/var/lib/kubelet/pods/7fc73677-9c82-4a92-bbd8-2900ae94b719/volumes"
Jul 15 23:27:53.204278 kubelet[2652]: I0715 23:27:53.204242 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-cilium-cgroup\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204278 kubelet[2652]: I0715 23:27:53.204276 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05085ac4-122d-478c-826a-bf4fc9e8bb59-clustermesh-secrets\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204400 kubelet[2652]: I0715 23:27:53.204296 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05085ac4-122d-478c-826a-bf4fc9e8bb59-cilium-config-path\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204400 kubelet[2652]: I0715 23:27:53.204312 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-host-proc-sys-kernel\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204400 kubelet[2652]: I0715 23:27:53.204369 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05085ac4-122d-478c-826a-bf4fc9e8bb59-cilium-ipsec-secrets\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204459 kubelet[2652]: I0715 23:27:53.204407 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-cni-path\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204459 kubelet[2652]: I0715 23:27:53.204430 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-etc-cni-netd\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204459 kubelet[2652]: I0715 23:27:53.204444 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-host-proc-sys-net\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204459 kubelet[2652]: I0715 23:27:53.204459 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-lib-modules\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204559 kubelet[2652]: I0715 23:27:53.204504 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05085ac4-122d-478c-826a-bf4fc9e8bb59-hubble-tls\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204559 kubelet[2652]: I0715 23:27:53.204540 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-bpf-maps\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204604 kubelet[2652]: I0715 23:27:53.204556 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7c5\" (UniqueName: \"kubernetes.io/projected/05085ac4-122d-478c-826a-bf4fc9e8bb59-kube-api-access-vn7c5\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204604 kubelet[2652]: I0715 23:27:53.204591 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-cilium-run\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204645 kubelet[2652]: I0715 23:27:53.204606 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-hostproc\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.204645 kubelet[2652]: I0715 23:27:53.204621 2652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05085ac4-122d-478c-826a-bf4fc9e8bb59-xtables-lock\") pod \"cilium-ndj6z\" (UID: \"05085ac4-122d-478c-826a-bf4fc9e8bb59\") " pod="kube-system/cilium-ndj6z"
Jul 15 23:27:53.215944 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 41636 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:53.217316 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:53.221994 systemd-logind[1509]: New session 25 of user core.
Jul 15 23:27:53.232214 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:27:53.284450 sshd[4419]: Connection closed by 10.0.0.1 port 41636
Jul 15 23:27:53.285160 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
Jul 15 23:27:53.299433 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:41636.service: Deactivated successfully.
Jul 15 23:27:53.300938 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 23:27:53.302791 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit.
Jul 15 23:27:53.305352 systemd-logind[1509]: Removed session 25.
Jul 15 23:27:53.307552 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:41640.service - OpenSSH per-connection server daemon (10.0.0.1:41640).
Jul 15 23:27:53.356793 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 41640 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:27:53.358410 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:27:53.362913 systemd-logind[1509]: New session 26 of user core.
Jul 15 23:27:53.371210 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 23:27:53.486192 kubelet[2652]: E0715 23:27:53.485790 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:53.486742 containerd[1526]: time="2025-07-15T23:27:53.486665709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndj6z,Uid:05085ac4-122d-478c-826a-bf4fc9e8bb59,Namespace:kube-system,Attempt:0,}"
Jul 15 23:27:53.502934 containerd[1526]: time="2025-07-15T23:27:53.502875182Z" level=info msg="connecting to shim bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4" address="unix:///run/containerd/s/bdc7767dc76c51bc3e218592b043c58ebc6b295260af1530d40637c10a5cd674" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:27:53.523227 systemd[1]: Started cri-containerd-bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4.scope - libcontainer container bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4.
Jul 15 23:27:53.551623 containerd[1526]: time="2025-07-15T23:27:53.551583840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndj6z,Uid:05085ac4-122d-478c-826a-bf4fc9e8bb59,Namespace:kube-system,Attempt:0,} returns sandbox id \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\""
Jul 15 23:27:53.552430 kubelet[2652]: E0715 23:27:53.552255 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:53.553920 containerd[1526]: time="2025-07-15T23:27:53.553891851Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 23:27:53.560113 containerd[1526]: time="2025-07-15T23:27:53.560072532Z" level=info msg="Container d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:53.565799 containerd[1526]: time="2025-07-15T23:27:53.565760579Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\""
Jul 15 23:27:53.566318 containerd[1526]: time="2025-07-15T23:27:53.566291453Z" level=info msg="StartContainer for \"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\""
Jul 15 23:27:53.567163 containerd[1526]: time="2025-07-15T23:27:53.567131242Z" level=info msg="connecting to shim d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60" address="unix:///run/containerd/s/bdc7767dc76c51bc3e218592b043c58ebc6b295260af1530d40637c10a5cd674" protocol=ttrpc version=3
Jul 15 23:27:53.588225 systemd[1]: Started cri-containerd-d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60.scope - libcontainer container d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60.
Jul 15 23:27:53.624930 containerd[1526]: time="2025-07-15T23:27:53.624822346Z" level=info msg="StartContainer for \"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\" returns successfully"
Jul 15 23:27:53.638311 systemd[1]: cri-containerd-d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60.scope: Deactivated successfully.
Jul 15 23:27:53.639619 containerd[1526]: time="2025-07-15T23:27:53.639559238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\" id:\"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\" pid:4497 exited_at:{seconds:1752622073 nanos:639166723}"
Jul 15 23:27:53.639619 containerd[1526]: time="2025-07-15T23:27:53.639599397Z" level=info msg="received exit event container_id:\"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\" id:\"d3f08fee0f425d8f9a2be1bab96386912f935e3af972ef649716379c9cb30b60\" pid:4497 exited_at:{seconds:1752622073 nanos:639166723}"
Jul 15 23:27:54.191106 kubelet[2652]: E0715 23:27:54.191027 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:54.420789 kubelet[2652]: E0715 23:27:54.420749 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:54.423756 containerd[1526]: time="2025-07-15T23:27:54.423679023Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 23:27:54.431931 containerd[1526]: time="2025-07-15T23:27:54.431897160Z" level=info msg="Container 142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:54.439524 containerd[1526]: time="2025-07-15T23:27:54.439403466Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\""
Jul 15 23:27:54.440219 containerd[1526]: time="2025-07-15T23:27:54.440191937Z" level=info msg="StartContainer for \"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\""
Jul 15 23:27:54.441170 containerd[1526]: time="2025-07-15T23:27:54.441088085Z" level=info msg="connecting to shim 142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048" address="unix:///run/containerd/s/bdc7767dc76c51bc3e218592b043c58ebc6b295260af1530d40637c10a5cd674" protocol=ttrpc version=3
Jul 15 23:27:54.462221 systemd[1]: Started cri-containerd-142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048.scope - libcontainer container 142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048.
Jul 15 23:27:54.488118 containerd[1526]: time="2025-07-15T23:27:54.488081098Z" level=info msg="StartContainer for \"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\" returns successfully"
Jul 15 23:27:54.497879 systemd[1]: cri-containerd-142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048.scope: Deactivated successfully.
Jul 15 23:27:54.498431 containerd[1526]: time="2025-07-15T23:27:54.498394729Z" level=info msg="received exit event container_id:\"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\" id:\"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\" pid:4543 exited_at:{seconds:1752622074 nanos:498193612}"
Jul 15 23:27:54.498508 containerd[1526]: time="2025-07-15T23:27:54.498477768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\" id:\"142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048\" pid:4543 exited_at:{seconds:1752622074 nanos:498193612}"
Jul 15 23:27:54.519639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-142abb654cbff34b390aa820f60bb1db955816ff707b1c3f03edb7ddff94b048-rootfs.mount: Deactivated successfully.
Jul 15 23:27:55.253658 kubelet[2652]: E0715 23:27:55.253605 2652 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 23:27:55.424969 kubelet[2652]: E0715 23:27:55.423981 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:55.426338 containerd[1526]: time="2025-07-15T23:27:55.426307125Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 23:27:55.441505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941532413.mount: Deactivated successfully.
Jul 15 23:27:55.443722 containerd[1526]: time="2025-07-15T23:27:55.443669032Z" level=info msg="Container 9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:55.452140 containerd[1526]: time="2025-07-15T23:27:55.452094729Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\""
Jul 15 23:27:55.452743 containerd[1526]: time="2025-07-15T23:27:55.452703802Z" level=info msg="StartContainer for \"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\""
Jul 15 23:27:55.454428 containerd[1526]: time="2025-07-15T23:27:55.454400421Z" level=info msg="connecting to shim 9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172" address="unix:///run/containerd/s/bdc7767dc76c51bc3e218592b043c58ebc6b295260af1530d40637c10a5cd674" protocol=ttrpc version=3
Jul 15 23:27:55.479216 systemd[1]: Started cri-containerd-9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172.scope - libcontainer container 9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172.
Jul 15 23:27:55.548046 containerd[1526]: time="2025-07-15T23:27:55.547614841Z" level=info msg="StartContainer for \"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\" returns successfully"
Jul 15 23:27:55.549483 systemd[1]: cri-containerd-9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172.scope: Deactivated successfully.
Jul 15 23:27:55.551372 containerd[1526]: time="2025-07-15T23:27:55.551341875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\" id:\"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\" pid:4587 exited_at:{seconds:1752622075 nanos:551080798}"
Jul 15 23:27:55.551554 containerd[1526]: time="2025-07-15T23:27:55.551525593Z" level=info msg="received exit event container_id:\"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\" id:\"9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172\" pid:4587 exited_at:{seconds:1752622075 nanos:551080798}"
Jul 15 23:27:55.571136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cbcf0fa3fa43c318b00168129d5d6d1aae6ede10a6760bc97bbd0dd742de172-rootfs.mount: Deactivated successfully.
Jul 15 23:27:56.431545 kubelet[2652]: E0715 23:27:56.430694 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:56.434990 containerd[1526]: time="2025-07-15T23:27:56.434958331Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 23:27:56.447089 containerd[1526]: time="2025-07-15T23:27:56.446950868Z" level=info msg="Container d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:56.456987 containerd[1526]: time="2025-07-15T23:27:56.456947468Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\""
Jul 15 23:27:56.457743 containerd[1526]: time="2025-07-15T23:27:56.457654579Z" level=info msg="StartContainer for \"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\""
Jul 15 23:27:56.458690 containerd[1526]: time="2025-07-15T23:27:56.458626168Z" level=info msg="connecting to shim d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a" address="unix:///run/containerd/s/bdc7767dc76c51bc3e218592b043c58ebc6b295260af1530d40637c10a5cd674" protocol=ttrpc version=3
Jul 15 23:27:56.477211 systemd[1]: Started cri-containerd-d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a.scope - libcontainer container d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a.
Jul 15 23:27:56.499448 systemd[1]: cri-containerd-d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a.scope: Deactivated successfully.
Jul 15 23:27:56.500512 containerd[1526]: time="2025-07-15T23:27:56.500474226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\" id:\"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\" pid:4626 exited_at:{seconds:1752622076 nanos:499819714}"
Jul 15 23:27:56.501677 containerd[1526]: time="2025-07-15T23:27:56.501643812Z" level=info msg="received exit event container_id:\"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\" id:\"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\" pid:4626 exited_at:{seconds:1752622076 nanos:499819714}"
Jul 15 23:27:56.502578 containerd[1526]: time="2025-07-15T23:27:56.502552561Z" level=info msg="StartContainer for \"d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a\" returns successfully"
Jul 15 23:27:56.521634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8c54e988d0e62d57c8960949ee1d3972830d557cac82f8b6de748ec058a306a-rootfs.mount: Deactivated successfully.
Jul 15 23:27:57.300088 kubelet[2652]: I0715 23:27:57.300019 2652 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T23:27:57Z","lastTransitionTime":"2025-07-15T23:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 15 23:27:57.436851 kubelet[2652]: E0715 23:27:57.436806 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:57.440030 containerd[1526]: time="2025-07-15T23:27:57.439560437Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 23:27:57.451974 containerd[1526]: time="2025-07-15T23:27:57.451219101Z" level=info msg="Container b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:27:57.460417 containerd[1526]: time="2025-07-15T23:27:57.460121356Z" level=info msg="CreateContainer within sandbox \"bff1968e061dc888629f9fe2f3353bebb836360fce58edb2bedb80b07448e1b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\""
Jul 15 23:27:57.462093 containerd[1526]: time="2025-07-15T23:27:57.461153864Z" level=info msg="StartContainer for \"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\""
Jul 15 23:27:57.462093 containerd[1526]: time="2025-07-15T23:27:57.462022214Z" level=info msg="connecting to shim b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6" address="unix:///run/containerd/s/bdc7767dc76c51bc3e218592b043c58ebc6b295260af1530d40637c10a5cd674" protocol=ttrpc version=3
Jul 15 23:27:57.481199 systemd[1]: Started cri-containerd-b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6.scope - libcontainer container b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6.
Jul 15 23:27:57.511493 containerd[1526]: time="2025-07-15T23:27:57.511380274Z" level=info msg="StartContainer for \"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\" returns successfully"
Jul 15 23:27:57.575306 containerd[1526]: time="2025-07-15T23:27:57.575162845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\" id:\"bd7fe76cc40f706cc831fdcfe345c75facb5a8ab7f97f4845290e189f09f36a5\" pid:4693 exited_at:{seconds:1752622077 nanos:574144497}"
Jul 15 23:27:57.790090 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 15 23:27:58.443156 kubelet[2652]: E0715 23:27:58.443044 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:58.458984 kubelet[2652]: I0715 23:27:58.458926 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ndj6z" podStartSLOduration=5.458909016 podStartE2EDuration="5.458909016s" podCreationTimestamp="2025-07-15 23:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:27:58.45858866 +0000 UTC m=+83.339312359" watchObservedRunningTime="2025-07-15 23:27:58.458909016 +0000 UTC m=+83.339632635"
Jul 15 23:27:59.487865 kubelet[2652]: E0715 23:27:59.487345 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:27:59.750415 containerd[1526]: time="2025-07-15T23:27:59.750344567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\" id:\"8ae453c3d1354eadf56161dc76ab3aa8b1874935ab124c2508b1107fad8db067\" pid:4894 exit_status:1 exited_at:{seconds:1752622079 nanos:749882853}"
Jul 15 23:28:00.644902 systemd-networkd[1440]: lxc_health: Link UP
Jul 15 23:28:00.645152 systemd-networkd[1440]: lxc_health: Gained carrier
Jul 15 23:28:01.488549 kubelet[2652]: E0715 23:28:01.488511 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:28:01.879529 containerd[1526]: time="2025-07-15T23:28:01.879480081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\" id:\"13dd1f3657de9f43ed7188cd08ec3582048df8f804b1a3040e00d42a81c78f4d\" pid:5226 exited_at:{seconds:1752622081 nanos:878328454}"
Jul 15 23:28:02.128187 systemd-networkd[1440]: lxc_health: Gained IPv6LL
Jul 15 23:28:02.450840 kubelet[2652]: E0715 23:28:02.450799 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:28:03.452564 kubelet[2652]: E0715 23:28:03.452503 2652 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:28:03.990356 containerd[1526]: time="2025-07-15T23:28:03.990312450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\" id:\"4f5390d5d6cea78334c6caa345e0514c7b75474c242d0521d37c41f61485882c\" pid:5254 exited_at:{seconds:1752622083 nanos:989981013}"
Jul 15 23:28:06.095030 containerd[1526]: time="2025-07-15T23:28:06.094982885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c8fde06eabfcb7e204736934e9f914cbd3957caaad2a98a7e9fbf7f8c14a6\" id:\"0818cf0945bc15e614880ca93a74a0ec320128be84f8c4a56b915e7f20129c59\" pid:5285 exited_at:{seconds:1752622086 nanos:94435531}"
Jul 15 23:28:06.107368 sshd[4432]: Connection closed by 10.0.0.1 port 41640
Jul 15 23:28:06.107847 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Jul 15 23:28:06.111605 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:41640.service: Deactivated successfully.
Jul 15 23:28:06.113434 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 23:28:06.114841 systemd-logind[1509]: Session 26 logged out. Waiting for processes to exit.
Jul 15 23:28:06.116740 systemd-logind[1509]: Removed session 26.