Jul 9 23:48:23.864837 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 9 23:48:23.864860 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 9 23:48:23.864870 kernel: KASLR enabled
Jul 9 23:48:23.864875 kernel: efi: EFI v2.7 by EDK II
Jul 9 23:48:23.864881 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 9 23:48:23.864886 kernel: random: crng init done
Jul 9 23:48:23.864893 kernel: secureboot: Secure boot disabled
Jul 9 23:48:23.864899 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:48:23.864905 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 9 23:48:23.864912 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 23:48:23.864918 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864924 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864929 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864935 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864942 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864949 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864955 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864961 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864967 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.864973 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 9 23:48:23.864979 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 23:48:23.864994 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:48:23.865001 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
Jul 9 23:48:23.865007 kernel: Zone ranges:
Jul 9 23:48:23.865013 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:48:23.865022 kernel: DMA32 empty
Jul 9 23:48:23.865028 kernel: Normal empty
Jul 9 23:48:23.865034 kernel: Device empty
Jul 9 23:48:23.865039 kernel: Movable zone start for each node
Jul 9 23:48:23.865045 kernel: Early memory node ranges
Jul 9 23:48:23.865051 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 9 23:48:23.865057 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 9 23:48:23.865063 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 9 23:48:23.865069 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 9 23:48:23.865075 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 9 23:48:23.865081 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 9 23:48:23.865087 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 9 23:48:23.865094 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 9 23:48:23.865100 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 9 23:48:23.865107 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 9 23:48:23.865115 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 9 23:48:23.865122 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 9 23:48:23.865128 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 9 23:48:23.865135 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:48:23.865142 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 9 23:48:23.865149 kernel: psci: probing for conduit method from ACPI.
Jul 9 23:48:23.865155 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 23:48:23.865161 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 23:48:23.865167 kernel: psci: Trusted OS migration not required
Jul 9 23:48:23.865174 kernel: psci: SMC Calling Convention v1.1
Jul 9 23:48:23.865180 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 9 23:48:23.865187 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 23:48:23.865193 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 23:48:23.865201 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 9 23:48:23.865208 kernel: Detected PIPT I-cache on CPU0
Jul 9 23:48:23.865214 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 23:48:23.865221 kernel: CPU features: detected: Spectre-v4
Jul 9 23:48:23.865227 kernel: CPU features: detected: Spectre-BHB
Jul 9 23:48:23.865234 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 23:48:23.865240 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 23:48:23.865246 kernel: CPU features: detected: ARM erratum 1418040
Jul 9 23:48:23.865253 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 23:48:23.865259 kernel: alternatives: applying boot alternatives
Jul 9 23:48:23.865267 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:48:23.865275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:48:23.865282 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 23:48:23.865288 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:48:23.865295 kernel: Fallback order for Node 0: 0
Jul 9 23:48:23.865301 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 9 23:48:23.865308 kernel: Policy zone: DMA
Jul 9 23:48:23.865314 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:48:23.865321 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 9 23:48:23.865327 kernel: software IO TLB: area num 4.
Jul 9 23:48:23.865334 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 9 23:48:23.865340 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
Jul 9 23:48:23.865347 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 23:48:23.865355 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:48:23.865362 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:48:23.865369 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 23:48:23.865375 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:48:23.865382 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:48:23.865388 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:48:23.865395 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 23:48:23.865401 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 23:48:23.865408 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 23:48:23.865415 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 23:48:23.865421 kernel: GICv3: 256 SPIs implemented
Jul 9 23:48:23.865428 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 23:48:23.865435 kernel: Root IRQ handler: gic_handle_irq
Jul 9 23:48:23.865441 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 9 23:48:23.865448 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 9 23:48:23.865455 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 9 23:48:23.865461 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 9 23:48:23.865468 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 9 23:48:23.865474 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 9 23:48:23.865481 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 9 23:48:23.865487 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 9 23:48:23.865494 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 23:48:23.865501 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.865508 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 9 23:48:23.865515 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 9 23:48:23.865522 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 9 23:48:23.865528 kernel: arm-pv: using stolen time PV
Jul 9 23:48:23.865535 kernel: Console: colour dummy device 80x25
Jul 9 23:48:23.865542 kernel: ACPI: Core revision 20240827
Jul 9 23:48:23.865549 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 9 23:48:23.865556 kernel: pid_max: default: 32768 minimum: 301
Jul 9 23:48:23.865562 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 23:48:23.865570 kernel: landlock: Up and running.
Jul 9 23:48:23.865577 kernel: SELinux: Initializing.
Jul 9 23:48:23.865584 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.865591 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.865597 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:48:23.865604 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:48:23.865611 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 23:48:23.865618 kernel: Remapping and enabling EFI services.
Jul 9 23:48:23.865625 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:48:23.865631 kernel: Detected PIPT I-cache on CPU1
Jul 9 23:48:23.865643 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 9 23:48:23.865650 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 9 23:48:23.865659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.865666 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 9 23:48:23.865674 kernel: Detected PIPT I-cache on CPU2
Jul 9 23:48:23.865681 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 9 23:48:23.865697 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 9 23:48:23.865706 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.865713 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 9 23:48:23.865720 kernel: Detected PIPT I-cache on CPU3
Jul 9 23:48:23.865726 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 9 23:48:23.865733 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 9 23:48:23.865741 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.865747 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 9 23:48:23.865754 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 23:48:23.865761 kernel: SMP: Total of 4 processors activated.
Jul 9 23:48:23.865768 kernel: CPU: All CPU(s) started at EL1
Jul 9 23:48:23.865776 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 23:48:23.865783 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 23:48:23.865790 kernel: CPU features: detected: Common not Private translations
Jul 9 23:48:23.865797 kernel: CPU features: detected: CRC32 instructions
Jul 9 23:48:23.865803 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 9 23:48:23.865810 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 23:48:23.865817 kernel: CPU features: detected: LSE atomic instructions
Jul 9 23:48:23.865824 kernel: CPU features: detected: Privileged Access Never
Jul 9 23:48:23.865831 kernel: CPU features: detected: RAS Extension Support
Jul 9 23:48:23.865839 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 23:48:23.865846 kernel: alternatives: applying system-wide alternatives
Jul 9 23:48:23.865853 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 9 23:48:23.865861 kernel: Memory: 2440420K/2572288K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 125920K reserved, 0K cma-reserved)
Jul 9 23:48:23.865868 kernel: devtmpfs: initialized
Jul 9 23:48:23.865876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 23:48:23.865883 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.865890 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 23:48:23.865897 kernel: 0 pages in range for non-PLT usage
Jul 9 23:48:23.865905 kernel: 508448 pages in range for PLT usage
Jul 9 23:48:23.865912 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 23:48:23.865918 kernel: SMBIOS 3.0.0 present.
Jul 9 23:48:23.865925 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 9 23:48:23.865932 kernel: DMI: Memory slots populated: 1/1
Jul 9 23:48:23.865939 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 23:48:23.865946 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 23:48:23.865953 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 23:48:23.865960 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 23:48:23.865969 kernel: audit: initializing netlink subsys (disabled)
Jul 9 23:48:23.865976 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 9 23:48:23.865982 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 23:48:23.865994 kernel: cpuidle: using governor menu
Jul 9 23:48:23.866001 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 23:48:23.866009 kernel: ASID allocator initialised with 32768 entries
Jul 9 23:48:23.866016 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 23:48:23.866023 kernel: Serial: AMBA PL011 UART driver
Jul 9 23:48:23.866030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 23:48:23.866038 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 23:48:23.866045 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 23:48:23.866052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 23:48:23.866059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 23:48:23.866066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 23:48:23.866073 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 23:48:23.866080 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 23:48:23.866087 kernel: ACPI: Added _OSI(Module Device)
Jul 9 23:48:23.866094 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 23:48:23.866102 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 23:48:23.866109 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 23:48:23.866116 kernel: ACPI: Interpreter enabled
Jul 9 23:48:23.866123 kernel: ACPI: Using GIC for interrupt routing
Jul 9 23:48:23.866130 kernel: ACPI: MCFG table detected, 1 entries
Jul 9 23:48:23.866137 kernel: ACPI: CPU0 has been hot-added
Jul 9 23:48:23.866144 kernel: ACPI: CPU1 has been hot-added
Jul 9 23:48:23.866151 kernel: ACPI: CPU2 has been hot-added
Jul 9 23:48:23.866157 kernel: ACPI: CPU3 has been hot-added
Jul 9 23:48:23.866164 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 23:48:23.866173 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 23:48:23.866180 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 23:48:23.866323 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 23:48:23.866395 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 9 23:48:23.866465 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 9 23:48:23.866525 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 9 23:48:23.866587 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 9 23:48:23.866608 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 9 23:48:23.866615 kernel: PCI host bridge to bus 0000:00
Jul 9 23:48:23.866693 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 9 23:48:23.866757 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 9 23:48:23.866813 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 9 23:48:23.866869 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 23:48:23.866943 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 9 23:48:23.867030 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 23:48:23.867098 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 9 23:48:23.867159 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 9 23:48:23.867220 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 23:48:23.867282 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 9 23:48:23.867342 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 9 23:48:23.867407 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 9 23:48:23.867465 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 9 23:48:23.867519 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 9 23:48:23.867574 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 9 23:48:23.867583 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 9 23:48:23.867590 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 9 23:48:23.867597 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 9 23:48:23.867604 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 9 23:48:23.867613 kernel: iommu: Default domain type: Translated
Jul 9 23:48:23.867620 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 23:48:23.867627 kernel: efivars: Registered efivars operations
Jul 9 23:48:23.867633 kernel: vgaarb: loaded
Jul 9 23:48:23.867640 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 23:48:23.867647 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 23:48:23.867654 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 23:48:23.867661 kernel: pnp: PnP ACPI init
Jul 9 23:48:23.867768 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 9 23:48:23.867783 kernel: pnp: PnP ACPI: found 1 devices
Jul 9 23:48:23.867790 kernel: NET: Registered PF_INET protocol family
Jul 9 23:48:23.867798 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 23:48:23.867805 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 23:48:23.867814 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 23:48:23.867821 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 23:48:23.867828 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 23:48:23.867835 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 23:48:23.867844 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.867852 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.867860 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 23:48:23.867866 kernel: PCI: CLS 0 bytes, default 64
Jul 9 23:48:23.867873 kernel: kvm [1]: HYP mode not available
Jul 9 23:48:23.867880 kernel: Initialise system trusted keyrings
Jul 9 23:48:23.867887 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 23:48:23.867895 kernel: Key type asymmetric registered
Jul 9 23:48:23.867901 kernel: Asymmetric key parser 'x509' registered
Jul 9 23:48:23.867910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 23:48:23.867917 kernel: io scheduler mq-deadline registered
Jul 9 23:48:23.867924 kernel: io scheduler kyber registered
Jul 9 23:48:23.867931 kernel: io scheduler bfq registered
Jul 9 23:48:23.867938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 9 23:48:23.867945 kernel: ACPI: button: Power Button [PWRB]
Jul 9 23:48:23.867952 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 9 23:48:23.868028 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 9 23:48:23.868040 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 23:48:23.868049 kernel: thunder_xcv, ver 1.0
Jul 9 23:48:23.868057 kernel: thunder_bgx, ver 1.0
Jul 9 23:48:23.868064 kernel: nicpf, ver 1.0
Jul 9 23:48:23.868072 kernel: nicvf, ver 1.0
Jul 9 23:48:23.868172 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 23:48:23.868232 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:48:23 UTC (1752104903)
Jul 9 23:48:23.868241 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 23:48:23.868248 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 23:48:23.868257 kernel: watchdog: NMI not fully supported
Jul 9 23:48:23.868264 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 23:48:23.868272 kernel: NET: Registered PF_INET6 protocol family
Jul 9 23:48:23.868279 kernel: Segment Routing with IPv6
Jul 9 23:48:23.868286 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 23:48:23.868294 kernel: NET: Registered PF_PACKET protocol family
Jul 9 23:48:23.868301 kernel: Key type dns_resolver registered
Jul 9 23:48:23.868308 kernel: registered taskstats version 1
Jul 9 23:48:23.868315 kernel: Loading compiled-in X.509 certificates
Jul 9 23:48:23.868321 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 9 23:48:23.868330 kernel: Demotion targets for Node 0: null
Jul 9 23:48:23.868337 kernel: Key type .fscrypt registered
Jul 9 23:48:23.868343 kernel: Key type fscrypt-provisioning registered
Jul 9 23:48:23.868350 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:48:23.868357 kernel: ima: Allocated hash algorithm: sha1
Jul 9 23:48:23.868364 kernel: ima: No architecture policies found
Jul 9 23:48:23.868371 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 23:48:23.868378 kernel: clk: Disabling unused clocks
Jul 9 23:48:23.868387 kernel: PM: genpd: Disabling unused power domains
Jul 9 23:48:23.868394 kernel: Warning: unable to open an initial console.
Jul 9 23:48:23.868401 kernel: Freeing unused kernel memory: 39488K
Jul 9 23:48:23.868408 kernel: Run /init as init process
Jul 9 23:48:23.868415 kernel: with arguments:
Jul 9 23:48:23.868422 kernel: /init
Jul 9 23:48:23.868429 kernel: with environment:
Jul 9 23:48:23.868435 kernel: HOME=/
Jul 9 23:48:23.868442 kernel: TERM=linux
Jul 9 23:48:23.868450 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 23:48:23.868458 systemd[1]: Successfully made /usr/ read-only.
Jul 9 23:48:23.868468 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:48:23.868476 systemd[1]: Detected virtualization kvm.
Jul 9 23:48:23.868483 systemd[1]: Detected architecture arm64.
Jul 9 23:48:23.868491 systemd[1]: Running in initrd.
Jul 9 23:48:23.868498 systemd[1]: No hostname configured, using default hostname.
Jul 9 23:48:23.868507 systemd[1]: Hostname set to .
Jul 9 23:48:23.868514 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 23:48:23.868521 systemd[1]: Queued start job for default target initrd.target.
Jul 9 23:48:23.868529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:48:23.868536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:48:23.868544 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 23:48:23.868552 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:48:23.868559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 23:48:23.868569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 23:48:23.868578 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 23:48:23.868585 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 23:48:23.868593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:48:23.868600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:48:23.868607 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:48:23.868615 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:48:23.868623 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:48:23.868631 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:48:23.868638 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:48:23.868646 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:48:23.868653 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:48:23.868661 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 23:48:23.868669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:48:23.868677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:48:23.868694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:48:23.868704 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:48:23.868712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 23:48:23.868720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:48:23.868727 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 23:48:23.868735 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 23:48:23.868742 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 23:48:23.868750 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:48:23.868757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:48:23.868766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:48:23.868774 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 23:48:23.868782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:48:23.868790 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 23:48:23.868798 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:48:23.868823 systemd-journald[244]: Collecting audit messages is disabled.
Jul 9 23:48:23.868842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:48:23.868851 systemd-journald[244]: Journal started
Jul 9 23:48:23.868871 systemd-journald[244]: Runtime Journal (/run/log/journal/b1bd73137dbf4d8f977bb49ca35befc1) is 6M, max 48.5M, 42.4M free.
Jul 9 23:48:23.859868 systemd-modules-load[247]: Inserted module 'overlay'
Jul 9 23:48:23.873727 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:48:23.873762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 23:48:23.875850 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:48:23.878952 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 9 23:48:23.879912 kernel: Bridge firewalling registered
Jul 9 23:48:23.879413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 23:48:23.881627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:48:23.883744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:48:23.895218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:48:23.897611 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:48:23.904140 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:48:23.905610 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 23:48:23.908599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:48:23.912486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:48:23.915486 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:48:23.916754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:48:23.919827 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 23:48:23.942579 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:48:23.960477 systemd-resolved[289]: Positive Trust Anchors:
Jul 9 23:48:23.960494 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:48:23.960526 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:48:23.965506 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jul 9 23:48:23.969553 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:48:23.971468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:48:24.030723 kernel: SCSI subsystem initialized Jul 9 23:48:24.036712 kernel: Loading iSCSI transport class v2.0-870. Jul 9 23:48:24.044726 kernel: iscsi: registered transport (tcp) Jul 9 23:48:24.057720 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:48:24.057751 kernel: QLogic iSCSI HBA Driver Jul 9 23:48:24.074916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:48:24.089207 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:48:24.090786 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:48:24.141759 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:48:24.144210 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 23:48:24.218727 kernel: raid6: neonx8 gen() 15692 MB/s Jul 9 23:48:24.235711 kernel: raid6: neonx4 gen() 15795 MB/s Jul 9 23:48:24.252714 kernel: raid6: neonx2 gen() 13047 MB/s Jul 9 23:48:24.269713 kernel: raid6: neonx1 gen() 10182 MB/s Jul 9 23:48:24.286707 kernel: raid6: int64x8 gen() 6884 MB/s Jul 9 23:48:24.303710 kernel: raid6: int64x4 gen() 7337 MB/s Jul 9 23:48:24.320708 kernel: raid6: int64x2 gen() 6067 MB/s Jul 9 23:48:24.337910 kernel: raid6: int64x1 gen() 4951 MB/s Jul 9 23:48:24.337924 kernel: raid6: using algorithm neonx4 gen() 15795 MB/s
Jul 9 23:48:24.355865 kernel: raid6: .... xor() 12155 MB/s, rmw enabled Jul 9 23:48:24.355898 kernel: raid6: using neon recovery algorithm Jul 9 23:48:24.365712 kernel: xor: measuring software checksum speed Jul 9 23:48:24.365783 kernel: 8regs : 21483 MB/sec Jul 9 23:48:24.365794 kernel: 32regs : 19057 MB/sec Jul 9 23:48:24.367028 kernel: arm64_neon : 27870 MB/sec Jul 9 23:48:24.367042 kernel: xor: using function: arm64_neon (27870 MB/sec) Jul 9 23:48:24.427723 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:48:24.434964 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:48:24.439321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:48:24.476060 systemd-udevd[501]: Using default interface naming scheme 'v255'. Jul 9 23:48:24.480474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:48:24.485156 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:48:24.514364 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Jul 9 23:48:24.541732 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:48:24.544442 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:48:24.614443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:48:24.620289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:48:24.656722 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 9 23:48:24.659567 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 9 23:48:24.662976 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 23:48:24.663027 kernel: GPT:9289727 != 19775487 Jul 9 23:48:24.663037 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 23:48:24.664086 kernel: GPT:9289727 != 19775487 Jul 9 23:48:24.665322 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 23:48:24.665354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:48:24.672533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:48:24.673896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:24.676772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:48:24.678978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:48:24.696971 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 23:48:24.711091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:24.719318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 23:48:24.720885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:48:24.733162 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 23:48:24.734495 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 23:48:24.743195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:48:24.744547 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:48:24.746724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:48:24.748796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:48:24.751704 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:48:24.753657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:48:24.771407 disk-uuid[594]: Primary Header is updated. 
Jul 9 23:48:24.771407 disk-uuid[594]: Secondary Entries is updated. Jul 9 23:48:24.771407 disk-uuid[594]: Secondary Header is updated. Jul 9 23:48:24.775715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:48:24.780259 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:48:25.790445 disk-uuid[598]: The operation has completed successfully. Jul 9 23:48:25.791588 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:48:25.816464 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:48:25.816573 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:48:25.848262 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:48:25.873914 sh[614]: Success Jul 9 23:48:25.887859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 23:48:25.887933 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:48:25.889855 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 23:48:25.912764 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 23:48:25.945896 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:48:25.948976 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:48:25.966595 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 9 23:48:25.977658 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 23:48:25.977771 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (626) Jul 9 23:48:25.981062 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b Jul 9 23:48:25.981106 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:25.982810 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 23:48:25.986451 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:48:25.987823 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:48:25.989354 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:48:25.990234 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:48:25.992051 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:48:26.023644 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (659) Jul 9 23:48:26.023721 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.023733 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:26.025435 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:48:26.031713 kernel: BTRFS info (device vda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.032736 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:48:26.035974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:48:26.121635 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 9 23:48:26.125092 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:48:26.172645 systemd-networkd[800]: lo: Link UP Jul 9 23:48:26.172656 systemd-networkd[800]: lo: Gained carrier Jul 9 23:48:26.173538 systemd-networkd[800]: Enumeration completed Jul 9 23:48:26.173672 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:48:26.174443 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:48:26.174447 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:48:26.177860 systemd-networkd[800]: eth0: Link UP Jul 9 23:48:26.177864 systemd-networkd[800]: eth0: Gained carrier Jul 9 23:48:26.177873 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:48:26.180875 systemd[1]: Reached target network.target - Network. 
Jul 9 23:48:26.204766 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:48:26.227921 ignition[702]: Ignition 2.21.0 Jul 9 23:48:26.227937 ignition[702]: Stage: fetch-offline Jul 9 23:48:26.227991 ignition[702]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.228001 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.228205 ignition[702]: parsed url from cmdline: "" Jul 9 23:48:26.228208 ignition[702]: no config URL provided Jul 9 23:48:26.228213 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:48:26.228220 ignition[702]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:48:26.228241 ignition[702]: op(1): [started] loading QEMU firmware config module Jul 9 23:48:26.228246 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 23:48:26.238184 ignition[702]: op(1): [finished] loading QEMU firmware config module Jul 9 23:48:26.276657 ignition[702]: parsing config with SHA512: 5404e94848c804fd106f2eb6d79953f5fd120076dcf5c3a4eb6400cd052578c7ced518839612ef2581c8e2df8cc6afbbd5601f5da50a4aa2376ea14bdb3c5d3e Jul 9 23:48:26.284893 unknown[702]: fetched base config from "system" Jul 9 23:48:26.284906 unknown[702]: fetched user config from "qemu" Jul 9 23:48:26.285258 ignition[702]: fetch-offline: fetch-offline passed Jul 9 23:48:26.285635 systemd-resolved[289]: Detected conflict on linux IN A 10.0.0.69 Jul 9 23:48:26.285315 ignition[702]: Ignition finished successfully Jul 9 23:48:26.285645 systemd-resolved[289]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jul 9 23:48:26.287767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:48:26.289081 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 23:48:26.291911 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 9 23:48:26.324801 ignition[813]: Ignition 2.21.0 Jul 9 23:48:26.324817 ignition[813]: Stage: kargs Jul 9 23:48:26.324949 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.324958 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.325726 ignition[813]: kargs: kargs passed Jul 9 23:48:26.326241 ignition[813]: Ignition finished successfully Jul 9 23:48:26.331054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:48:26.334340 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 23:48:26.368652 ignition[821]: Ignition 2.21.0 Jul 9 23:48:26.368671 ignition[821]: Stage: disks Jul 9 23:48:26.368837 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.368846 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.370650 ignition[821]: disks: disks passed Jul 9 23:48:26.373278 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:48:26.370747 ignition[821]: Ignition finished successfully Jul 9 23:48:26.374832 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:48:26.376462 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:48:26.378307 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:48:26.380177 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:48:26.382221 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:48:26.384970 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:48:26.416609 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 9 23:48:26.424098 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:48:26.426825 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 9 23:48:26.501712 kernel: EXT4-fs (vda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none. Jul 9 23:48:26.501888 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:48:26.503129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:48:26.505540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:48:26.507516 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:48:26.508498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:48:26.508542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:48:26.508566 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:48:26.529330 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:48:26.531889 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 23:48:26.537285 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 9 23:48:26.537308 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.537318 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:26.537328 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:48:26.546210 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 23:48:26.588397 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:48:26.591854 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:48:26.595800 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:48:26.598866 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:48:26.684197 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:48:26.686250 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:48:26.687869 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:48:26.706715 kernel: BTRFS info (device vda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.719826 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 23:48:26.732245 ignition[954]: INFO : Ignition 2.21.0 Jul 9 23:48:26.732245 ignition[954]: INFO : Stage: mount Jul 9 23:48:26.733796 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.733796 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.733796 ignition[954]: INFO : mount: mount passed Jul 9 23:48:26.733796 ignition[954]: INFO : Ignition finished successfully Jul 9 23:48:26.737163 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:48:26.739446 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:48:26.976756 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:48:26.978355 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 9 23:48:27.002736 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (968) Jul 9 23:48:27.004827 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:27.004877 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:27.004898 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:48:27.008167 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:48:27.039700 ignition[985]: INFO : Ignition 2.21.0 Jul 9 23:48:27.039700 ignition[985]: INFO : Stage: files Jul 9 23:48:27.041952 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:27.041952 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:27.044124 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:48:27.044124 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:48:27.044124 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:48:27.047877 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:48:27.047877 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:48:27.047877 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:48:27.047414 unknown[985]: wrote ssh authorized keys file for user: core Jul 9 23:48:27.052881 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 9 23:48:27.052881 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 9 23:48:27.755720 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 23:48:27.964828 systemd-networkd[800]: eth0: Gained IPv6LL Jul 9 23:48:29.997179 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 9 23:48:29.997179 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:48:29.997179 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 9 23:48:30.313397 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 9 23:48:30.409228 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:48:30.411287 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:48:30.427220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:48:30.427220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:48:30.427220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:48:30.427220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:48:30.427220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:48:30.427220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 9 23:48:30.844095 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 9 23:48:31.250800 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:48:31.250800 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 9 23:48:31.254492 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 9 23:48:31.256416 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 23:48:31.271620 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:48:31.275131 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:48:31.276668 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 23:48:31.276668 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:48:31.276668 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 23:48:31.281949 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:48:31.281949 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:48:31.281949 ignition[985]: INFO : files: files passed Jul 9 23:48:31.281949 ignition[985]: INFO : Ignition finished successfully Jul 9 23:48:31.280662 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:48:31.287184 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 23:48:31.290864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:48:31.300039 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:48:31.301131 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 23:48:31.303621 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 23:48:31.305251 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:48:31.306817 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:48:31.308469 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:48:31.310616 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:48:31.312042 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:48:31.314824 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:48:31.344871 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:48:31.345011 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:48:31.347126 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 23:48:31.349061 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:48:31.350843 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:48:31.351604 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:48:31.372992 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:48:31.375366 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 9 23:48:31.400532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:48:31.401758 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:48:31.403870 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 23:48:31.405649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:48:31.405794 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:48:31.408309 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:48:31.410229 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:48:31.411802 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:48:31.413462 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:48:31.415323 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:48:31.417257 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:48:31.419093 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:48:31.422450 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:48:31.424395 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:48:31.426308 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:48:31.428019 systemd[1]: Stopped target swap.target - Swaps. Jul 9 23:48:31.429553 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:48:31.429682 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:48:31.437849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:48:31.439766 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:48:31.442314 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 9 23:48:31.442425 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:48:31.444656 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 23:48:31.444788 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 23:48:31.447815 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:48:31.447935 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:48:31.450009 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:48:31.451580 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:48:31.454747 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:48:31.456574 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 23:48:31.458603 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:48:31.460128 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:48:31.460215 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:48:31.461746 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:48:31.461825 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:48:31.463407 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:48:31.463523 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:48:31.465379 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:48:31.465480 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:48:31.467853 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:48:31.469643 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:48:31.469791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 9 23:48:31.493279 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:48:31.494184 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:48:31.494316 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:48:31.496102 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:48:31.496217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:48:31.502141 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:48:31.502230 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:48:31.508734 ignition[1040]: INFO : Ignition 2.21.0 Jul 9 23:48:31.508734 ignition[1040]: INFO : Stage: umount Jul 9 23:48:31.510493 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:31.510493 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:31.510493 ignition[1040]: INFO : umount: umount passed Jul 9 23:48:31.510493 ignition[1040]: INFO : Ignition finished successfully Jul 9 23:48:31.509022 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:48:31.512345 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:48:31.512471 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:48:31.514134 systemd[1]: Stopped target network.target - Network. Jul 9 23:48:31.515483 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:48:31.515545 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:48:31.517316 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:48:31.517368 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:48:31.519161 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:48:31.519214 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 9 23:48:31.520898 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:48:31.520942 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:48:31.523045 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:48:31.524912 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:48:31.535001 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:48:31.535132 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 23:48:31.538457 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:48:31.538674 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:48:31.540731 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:48:31.545835 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:48:31.546349 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 23:48:31.548060 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:48:31.548114 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:48:31.550898 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 23:48:31.551748 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:48:31.551807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:48:31.553679 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:48:31.553734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:48:31.556539 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 23:48:31.556582 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 23:48:31.558522 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 9 23:48:31.558571 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:48:31.561609 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:48:31.564626 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:48:31.564716 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:48:31.579108 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 23:48:31.581237 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:48:31.585025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 23:48:31.585105 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 23:48:31.604097 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 23:48:31.604200 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:48:31.608308 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 23:48:31.608394 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:48:31.611341 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 23:48:31.611404 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 23:48:31.614029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:48:31.614089 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:48:31.617556 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 23:48:31.618647 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 23:48:31.618723 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:48:31.622767 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jul 9 23:48:31.622823 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:48:31.625114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:48:31.625165 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:31.629556 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 9 23:48:31.629606 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 23:48:31.629638 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:48:31.629955 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:48:31.638149 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:48:31.639344 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 23:48:31.639420 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 23:48:31.641452 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:48:31.641538 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:48:31.643619 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 23:48:31.643744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 23:48:31.645886 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 23:48:31.648082 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 23:48:31.667622 systemd[1]: Switching root. Jul 9 23:48:31.699055 systemd-journald[244]: Journal stopped Jul 9 23:48:32.642518 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Jul 9 23:48:32.642567 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 23:48:32.642578 kernel: SELinux: policy capability open_perms=1 Jul 9 23:48:32.642588 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 23:48:32.642601 kernel: SELinux: policy capability always_check_network=0 Jul 9 23:48:32.642812 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 23:48:32.642829 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 23:48:32.642843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 23:48:32.642852 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 23:48:32.642861 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 23:48:32.642877 kernel: audit: type=1403 audit(1752104911.910:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 23:48:32.642892 systemd[1]: Successfully loaded SELinux policy in 42.641ms. Jul 9 23:48:32.642912 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.679ms. Jul 9 23:48:32.642925 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:48:32.642935 systemd[1]: Detected virtualization kvm. Jul 9 23:48:32.642946 systemd[1]: Detected architecture arm64. Jul 9 23:48:32.642956 systemd[1]: Detected first boot. Jul 9 23:48:32.642973 systemd[1]: Initializing machine ID from VM UUID. Jul 9 23:48:32.642983 zram_generator::config[1086]: No configuration found. Jul 9 23:48:32.642994 kernel: NET: Registered PF_VSOCK protocol family Jul 9 23:48:32.643019 systemd[1]: Populated /etc with preset unit settings. Jul 9 23:48:32.643031 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 9 23:48:32.643042 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 23:48:32.643052 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 23:48:32.643064 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 23:48:32.643077 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 23:48:32.643087 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 23:48:32.643096 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 23:48:32.643112 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 23:48:32.643122 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 23:48:32.643132 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 23:48:32.643142 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 23:48:32.643153 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 23:48:32.643164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:48:32.643174 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:48:32.643187 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 23:48:32.643197 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 23:48:32.643207 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 23:48:32.643217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:48:32.643227 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 9 23:48:32.643237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:48:32.643248 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:48:32.643259 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 23:48:32.643269 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 23:48:32.643279 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 23:48:32.643289 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 23:48:32.643300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:48:32.643311 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:48:32.643326 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:48:32.643338 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:48:32.643348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 23:48:32.643357 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 23:48:32.643368 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 23:48:32.643377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:48:32.643387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:48:32.643397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:48:32.643407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 23:48:32.643420 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 23:48:32.643431 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 23:48:32.643443 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 23:48:32.643453 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 9 23:48:32.643463 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 23:48:32.643568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 23:48:32.643584 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 23:48:32.643595 systemd[1]: Reached target machines.target - Containers. Jul 9 23:48:32.643605 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 23:48:32.643624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:48:32.643636 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:48:32.643647 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 23:48:32.643657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:48:32.643667 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:48:32.643677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:48:32.643715 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 23:48:32.643729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:48:32.643739 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 23:48:32.643752 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 23:48:32.643777 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 23:48:32.643787 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 23:48:32.643797 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 9 23:48:32.643807 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:48:32.643817 kernel: fuse: init (API version 7.41) Jul 9 23:48:32.643827 kernel: loop: module loaded Jul 9 23:48:32.643836 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:48:32.643848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:48:32.643858 kernel: ACPI: bus type drm_connector registered Jul 9 23:48:32.643868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:48:32.643879 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 23:48:32.643889 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 23:48:32.643927 systemd-journald[1157]: Collecting audit messages is disabled. Jul 9 23:48:32.643951 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:48:32.643973 systemd-journald[1157]: Journal started Jul 9 23:48:32.643996 systemd-journald[1157]: Runtime Journal (/run/log/journal/b1bd73137dbf4d8f977bb49ca35befc1) is 6M, max 48.5M, 42.4M free. Jul 9 23:48:32.644038 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 23:48:32.648016 systemd[1]: Stopped verity-setup.service. Jul 9 23:48:32.389296 systemd[1]: Queued start job for default target multi-user.target. Jul 9 23:48:32.410936 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 23:48:32.411365 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 23:48:32.650930 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:48:32.651675 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 9 23:48:32.652908 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 23:48:32.654144 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 23:48:32.655247 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 23:48:32.656470 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 23:48:32.657880 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 23:48:32.661292 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 23:48:32.663061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:48:32.664711 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 23:48:32.664890 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 23:48:32.666485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:48:32.666653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:48:32.668080 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:48:32.668250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:48:32.669533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:48:32.669679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:48:32.671251 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 23:48:32.671415 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 23:48:32.672792 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:48:32.672983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:48:32.675231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:48:32.676624 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 9 23:48:32.679233 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 23:48:32.680956 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 23:48:32.694195 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:48:32.696818 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 23:48:32.699133 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 23:48:32.700238 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 23:48:32.700276 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:48:32.702339 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 23:48:32.710672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 23:48:32.712019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:48:32.715360 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 23:48:32.717449 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 23:48:32.718641 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:48:32.719633 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 23:48:32.720774 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:48:32.721700 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:48:32.732016 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 9 23:48:32.734128 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 23:48:32.735157 systemd-journald[1157]: Time spent on flushing to /var/log/journal/b1bd73137dbf4d8f977bb49ca35befc1 is 17.190ms for 890 entries. Jul 9 23:48:32.735157 systemd-journald[1157]: System Journal (/var/log/journal/b1bd73137dbf4d8f977bb49ca35befc1) is 8M, max 195.6M, 187.6M free. Jul 9 23:48:32.758468 systemd-journald[1157]: Received client request to flush runtime journal. Jul 9 23:48:32.758535 kernel: loop0: detected capacity change from 0 to 107312 Jul 9 23:48:32.739783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:48:32.741215 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 23:48:32.743287 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 23:48:32.746724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 23:48:32.750859 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 23:48:32.758793 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 23:48:32.763721 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 23:48:32.767214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:48:32.778725 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 23:48:32.791661 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 23:48:32.794852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:48:32.805032 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 23:48:32.811710 kernel: loop1: detected capacity change from 0 to 138376 Jul 9 23:48:32.825221 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. 
Jul 9 23:48:32.825239 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 9 23:48:32.829779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:48:32.848714 kernel: loop2: detected capacity change from 0 to 207008 Jul 9 23:48:32.879736 kernel: loop3: detected capacity change from 0 to 107312 Jul 9 23:48:32.885742 kernel: loop4: detected capacity change from 0 to 138376 Jul 9 23:48:32.892726 kernel: loop5: detected capacity change from 0 to 207008 Jul 9 23:48:32.896900 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 9 23:48:32.897305 (sd-merge)[1224]: Merged extensions into '/usr'. Jul 9 23:48:32.901265 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 23:48:32.901284 systemd[1]: Reloading... Jul 9 23:48:32.963905 zram_generator::config[1247]: No configuration found. Jul 9 23:48:33.003554 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 23:48:33.039596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:48:33.117351 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 23:48:33.117583 systemd[1]: Reloading finished in 215 ms. Jul 9 23:48:33.155465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 23:48:33.157016 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 23:48:33.168068 systemd[1]: Starting ensure-sysext.service... Jul 9 23:48:33.169983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:48:33.183695 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... 
Jul 9 23:48:33.183710 systemd[1]: Reloading... Jul 9 23:48:33.187283 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 23:48:33.187541 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 23:48:33.187923 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 23:48:33.188255 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 23:48:33.188950 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 23:48:33.189248 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 9 23:48:33.189358 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 9 23:48:33.191824 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:48:33.191912 systemd-tmpfiles[1286]: Skipping /boot Jul 9 23:48:33.200600 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:48:33.200717 systemd-tmpfiles[1286]: Skipping /boot Jul 9 23:48:33.233759 zram_generator::config[1313]: No configuration found. Jul 9 23:48:33.307529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:48:33.383624 systemd[1]: Reloading finished in 199 ms. Jul 9 23:48:33.408333 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 23:48:33.429914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:48:33.439047 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 9 23:48:33.441630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 23:48:33.459193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 23:48:33.463513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:48:33.466885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:48:33.471995 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 23:48:33.475808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:48:33.480449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:48:33.484034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:48:33.487999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:48:33.489851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:48:33.489988 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:48:33.502748 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 23:48:33.507727 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 23:48:33.509775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:48:33.509968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:48:33.511151 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 9 23:48:33.512023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 9 23:48:33.512202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:48:33.514105 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 23:48:33.517287 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:48:33.517451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:48:33.523708 augenrules[1379]: No rules Jul 9 23:48:33.525931 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:48:33.526179 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:48:33.530293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:48:33.537608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 23:48:33.543001 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:48:33.544122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:48:33.545352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:48:33.547710 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:48:33.556174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:48:33.559883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:48:33.561424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:48:33.561492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:48:33.564867 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 9 23:48:33.569968 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 23:48:33.571348 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 23:48:33.571847 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 23:48:33.574284 systemd[1]: Finished ensure-sysext.service. Jul 9 23:48:33.577576 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:48:33.577745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:48:33.593518 augenrules[1411]: /sbin/augenrules: No change Jul 9 23:48:33.596795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:48:33.598729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:48:33.600475 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:48:33.600658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:48:33.602356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:48:33.602540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:48:33.604531 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 23:48:33.604749 augenrules[1445]: No rules Jul 9 23:48:33.608730 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:48:33.609040 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:48:33.622035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:48:33.622092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 9 23:48:33.624294 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 23:48:33.644902 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 9 23:48:33.674846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:48:33.678842 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 23:48:33.711659 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 23:48:33.751742 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 23:48:33.756607 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 23:48:33.767899 systemd-resolved[1352]: Positive Trust Anchors: Jul 9 23:48:33.768818 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:48:33.768940 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:48:33.786921 systemd-resolved[1352]: Defaulting to hostname 'linux'. Jul 9 23:48:33.790419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:48:33.791753 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:48:33.793979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 9 23:48:33.799886 systemd-networkd[1423]: lo: Link UP Jul 9 23:48:33.799898 systemd-networkd[1423]: lo: Gained carrier Jul 9 23:48:33.800871 systemd-networkd[1423]: Enumeration completed Jul 9 23:48:33.800966 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:48:33.802387 systemd[1]: Reached target network.target - Network. Jul 9 23:48:33.810929 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 23:48:33.814106 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 23:48:33.815114 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:48:33.815122 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:48:33.816223 systemd-networkd[1423]: eth0: Link UP Jul 9 23:48:33.817293 systemd-networkd[1423]: eth0: Gained carrier Jul 9 23:48:33.817313 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:48:33.834751 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:48:33.837345 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Jul 9 23:48:33.838425 systemd-timesyncd[1458]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 9 23:48:33.838489 systemd-timesyncd[1458]: Initial clock synchronization to Wed 2025-07-09 23:48:34.147935 UTC. Jul 9 23:48:33.838904 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 23:48:33.862974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:33.864352 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 9 23:48:33.866897 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 23:48:33.868129 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 23:48:33.869448 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 23:48:33.870610 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 23:48:33.871854 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 23:48:33.873222 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 23:48:33.873258 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:48:33.874130 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:48:33.876397 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 23:48:33.878852 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 23:48:33.882234 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 23:48:33.883715 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 23:48:33.884890 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 23:48:33.887953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 23:48:33.889346 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 23:48:33.891038 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 23:48:33.892228 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:48:33.893286 systemd[1]: Reached target basic.target - Basic System. 
Jul 9 23:48:33.894291 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:48:33.894330 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:48:33.895313 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 23:48:33.897324 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 23:48:33.899252 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 23:48:33.901356 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 23:48:33.903375 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 23:48:33.904479 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 23:48:33.905459 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 23:48:33.908219 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 23:48:33.912403 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 23:48:33.916058 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 23:48:33.921866 jq[1499]: false Jul 9 23:48:33.926725 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 23:48:33.928844 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 23:48:33.929371 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 23:48:33.930507 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 9 23:48:33.932630 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 23:48:33.936733 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 23:48:33.938378 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 23:48:33.939625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 23:48:33.942179 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 23:48:33.942371 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 23:48:33.946118 extend-filesystems[1500]: Found /dev/vda6 Jul 9 23:48:33.949655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 23:48:33.955987 jq[1514]: true Jul 9 23:48:33.949862 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 23:48:33.957944 extend-filesystems[1500]: Found /dev/vda9 Jul 9 23:48:33.959023 (ntainerd)[1522]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 23:48:33.963499 extend-filesystems[1500]: Checking size of /dev/vda9 Jul 9 23:48:33.974661 jq[1528]: true Jul 9 23:48:33.994011 tar[1519]: linux-arm64/LICENSE Jul 9 23:48:33.997060 tar[1519]: linux-arm64/helm Jul 9 23:48:34.015196 extend-filesystems[1500]: Resized partition /dev/vda9 Jul 9 23:48:34.020299 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 23:48:34.029746 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 23:48:34.036164 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 23:48:34.040220 dbus-daemon[1497]: [system] SELinux support is enabled Jul 9 23:48:34.040448 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 23:48:34.041896 systemd-logind[1512]: New seat seat0. 
Jul 9 23:48:34.046332 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 23:48:34.048226 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 23:48:34.048265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 23:48:34.050276 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 23:48:34.050300 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 23:48:34.059499 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 9 23:48:34.063558 update_engine[1513]: I20250709 23:48:34.062753 1513 main.cc:92] Flatcar Update Engine starting Jul 9 23:48:34.068173 systemd[1]: Started update-engine.service - Update Engine. Jul 9 23:48:34.069622 update_engine[1513]: I20250709 23:48:34.068846 1513 update_check_scheduler.cc:74] Next update check in 3m24s Jul 9 23:48:34.072136 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 23:48:34.085198 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 23:48:34.093753 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 23:48:34.093753 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 23:48:34.093753 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 23:48:34.098401 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Jul 9 23:48:34.098445 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 23:48:34.102113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 9 23:48:34.131746 bash[1554]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:48:34.131387 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 23:48:34.134298 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 23:48:34.173001 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:48:34.270614 containerd[1522]: time="2025-07-09T23:48:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 23:48:34.273007 containerd[1522]: time="2025-07-09T23:48:34.272957759Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 9 23:48:34.285931 containerd[1522]: time="2025-07-09T23:48:34.285881733Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.969µs" Jul 9 23:48:34.285931 containerd[1522]: time="2025-07-09T23:48:34.285919999Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 23:48:34.286030 containerd[1522]: time="2025-07-09T23:48:34.285945509Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 23:48:34.286124 containerd[1522]: time="2025-07-09T23:48:34.286092672Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 23:48:34.286124 containerd[1522]: time="2025-07-09T23:48:34.286116271Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 23:48:34.286166 containerd[1522]: time="2025-07-09T23:48:34.286140577Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286228 
containerd[1522]: time="2025-07-09T23:48:34.286203231Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286228 containerd[1522]: time="2025-07-09T23:48:34.286220598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286557 containerd[1522]: time="2025-07-09T23:48:34.286523732Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286557 containerd[1522]: time="2025-07-09T23:48:34.286551153Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286600 containerd[1522]: time="2025-07-09T23:48:34.286563950Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286600 containerd[1522]: time="2025-07-09T23:48:34.286572592Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286681 containerd[1522]: time="2025-07-09T23:48:34.286662502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286911 containerd[1522]: time="2025-07-09T23:48:34.286882581Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286934 containerd[1522]: time="2025-07-09T23:48:34.286921262Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 9 23:48:34.286934 containerd[1522]: time="2025-07-09T23:48:34.286931067Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 23:48:34.287651 containerd[1522]: time="2025-07-09T23:48:34.287612827Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 23:48:34.287996 containerd[1522]: time="2025-07-09T23:48:34.287968768Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 23:48:34.288103 containerd[1522]: time="2025-07-09T23:48:34.288079742Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:48:34.291653 containerd[1522]: time="2025-07-09T23:48:34.291616549Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 23:48:34.291696 containerd[1522]: time="2025-07-09T23:48:34.291679993Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 23:48:34.291717 containerd[1522]: time="2025-07-09T23:48:34.291708412Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 23:48:34.291788 containerd[1522]: time="2025-07-09T23:48:34.291735293Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 23:48:34.291788 containerd[1522]: time="2025-07-09T23:48:34.291768116Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 23:48:34.291840 containerd[1522]: time="2025-07-09T23:48:34.291809498Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 23:48:34.291840 containerd[1522]: time="2025-07-09T23:48:34.291826034Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 23:48:34.291888 containerd[1522]: 
time="2025-07-09T23:48:34.291839869Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 23:48:34.291888 containerd[1522]: time="2025-07-09T23:48:34.291852375Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 23:48:34.291888 containerd[1522]: time="2025-07-09T23:48:34.291863012Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 23:48:34.291888 containerd[1522]: time="2025-07-09T23:48:34.291872609Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 23:48:34.291888 containerd[1522]: time="2025-07-09T23:48:34.291885032Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 23:48:34.292056 containerd[1522]: time="2025-07-09T23:48:34.292020644Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 23:48:34.292056 containerd[1522]: time="2025-07-09T23:48:34.292052512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 23:48:34.292105 containerd[1522]: time="2025-07-09T23:48:34.292072330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 23:48:34.292105 containerd[1522]: time="2025-07-09T23:48:34.292083672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 23:48:34.292105 containerd[1522]: time="2025-07-09T23:48:34.292094807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 23:48:34.292159 containerd[1522]: time="2025-07-09T23:48:34.292106648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 23:48:34.292159 containerd[1522]: time="2025-07-09T23:48:34.292117991Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 23:48:34.292159 containerd[1522]: time="2025-07-09T23:48:34.292128212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 23:48:34.292159 containerd[1522]: time="2025-07-09T23:48:34.292139305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 23:48:34.292159 containerd[1522]: time="2025-07-09T23:48:34.292149651Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 23:48:34.292246 containerd[1522]: time="2025-07-09T23:48:34.292159705Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 23:48:34.292485 containerd[1522]: time="2025-07-09T23:48:34.292460886Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 23:48:34.292485 containerd[1522]: time="2025-07-09T23:48:34.292481909Z" level=info msg="Start snapshots syncer" Jul 9 23:48:34.292535 containerd[1522]: time="2025-07-09T23:48:34.292515397Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 23:48:34.292810 containerd[1522]: time="2025-07-09T23:48:34.292771332Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 23:48:34.292928 containerd[1522]: time="2025-07-09T23:48:34.292829831Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 23:48:34.292928 containerd[1522]: time="2025-07-09T23:48:34.292915586Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 23:48:34.293076 containerd[1522]: time="2025-07-09T23:48:34.293051614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 23:48:34.293106 containerd[1522]: time="2025-07-09T23:48:34.293082817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 23:48:34.293106 containerd[1522]: time="2025-07-09T23:48:34.293096818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 23:48:34.293147 containerd[1522]: time="2025-07-09T23:48:34.293109158Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 23:48:34.293147 containerd[1522]: time="2025-07-09T23:48:34.293121539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 23:48:34.293147 containerd[1522]: time="2025-07-09T23:48:34.293132799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 23:48:34.293147 containerd[1522]: time="2025-07-09T23:48:34.293143601Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 23:48:34.293217 containerd[1522]: time="2025-07-09T23:48:34.293168613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 23:48:34.293217 containerd[1522]: time="2025-07-09T23:48:34.293179997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 23:48:34.293217 containerd[1522]: time="2025-07-09T23:48:34.293190342Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 23:48:34.293267 containerd[1522]: time="2025-07-09T23:48:34.293246141Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:48:34.293267 containerd[1522]: time="2025-07-09T23:48:34.293260766Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:48:34.293313 containerd[1522]: time="2025-07-09T23:48:34.293269907Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:48:34.293354 containerd[1522]: time="2025-07-09T23:48:34.293280294Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:48:34.293354 containerd[1522]: time="2025-07-09T23:48:34.293350593Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 23:48:34.293402 containerd[1522]: time="2025-07-09T23:48:34.293364844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 23:48:34.293402 containerd[1522]: time="2025-07-09T23:48:34.293376145Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 23:48:34.293544 containerd[1522]: time="2025-07-09T23:48:34.293517989Z" level=info msg="runtime interface created" Jul 9 23:48:34.293544 containerd[1522]: time="2025-07-09T23:48:34.293528085Z" level=info msg="created NRI interface" Jul 9 23:48:34.293544 containerd[1522]: time="2025-07-09T23:48:34.293541630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 23:48:34.293608 containerd[1522]: time="2025-07-09T23:48:34.293555549Z" level=info msg="Connect containerd service" Jul 9 23:48:34.293608 containerd[1522]: time="2025-07-09T23:48:34.293582721Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:48:34.295424 containerd[1522]: 
time="2025-07-09T23:48:34.295363007Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:48:34.334771 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 23:48:34.356783 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:48:34.359932 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:48:34.378583 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:48:34.378823 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:48:34.382008 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:48:34.401854 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:48:34.405133 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:48:34.407579 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 23:48:34.409854 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 9 23:48:34.445771 containerd[1522]: time="2025-07-09T23:48:34.445605555Z" level=info msg="Start subscribing containerd event" Jul 9 23:48:34.445771 containerd[1522]: time="2025-07-09T23:48:34.445675480Z" level=info msg="Start recovering state" Jul 9 23:48:34.445771 containerd[1522]: time="2025-07-09T23:48:34.445782258Z" level=info msg="Start event monitor" Jul 9 23:48:34.446083 containerd[1522]: time="2025-07-09T23:48:34.445800705Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:48:34.446083 containerd[1522]: time="2025-07-09T23:48:34.445810137Z" level=info msg="Start streaming server" Jul 9 23:48:34.446083 containerd[1522]: time="2025-07-09T23:48:34.445820815Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:48:34.446083 containerd[1522]: time="2025-07-09T23:48:34.445827753Z" level=info msg="runtime interface starting up..." Jul 9 23:48:34.446083 containerd[1522]: time="2025-07-09T23:48:34.445833362Z" level=info msg="starting plugins..." Jul 9 23:48:34.446083 containerd[1522]: time="2025-07-09T23:48:34.445845868Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:48:34.446471 containerd[1522]: time="2025-07-09T23:48:34.446428453Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:48:34.446670 containerd[1522]: time="2025-07-09T23:48:34.446644626Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:48:34.449832 containerd[1522]: time="2025-07-09T23:48:34.449798403Z" level=info msg="containerd successfully booted in 0.179535s" Jul 9 23:48:34.449918 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:48:34.470273 tar[1519]: linux-arm64/README.md Jul 9 23:48:34.498524 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 9 23:48:35.388871 systemd-networkd[1423]: eth0: Gained IPv6LL Jul 9 23:48:35.391387 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:48:35.393285 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 23:48:35.395840 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 23:48:35.398214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:48:35.410365 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:48:35.427177 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 23:48:35.427427 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 23:48:35.429585 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:48:35.434741 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 23:48:36.003606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:48:36.006065 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:48:36.007965 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:48:36.012109 systemd[1]: Startup finished in 2.174s (kernel) + 8.281s (initrd) + 4.150s (userspace) = 14.606s. 
Jul 9 23:48:36.498131 kubelet[1629]: E0709 23:48:36.497996 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:48:36.500887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:48:36.501235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:48:36.501738 systemd[1]: kubelet.service: Consumed 852ms CPU time, 257M memory peak. Jul 9 23:48:36.870376 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:48:36.871968 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332). Jul 9 23:48:36.971343 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:48:36.973328 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:36.981378 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:48:36.982429 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:48:36.990118 systemd-logind[1512]: New session 1 of user core. Jul 9 23:48:37.018817 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:48:37.021851 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:48:37.049145 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:48:37.051973 systemd-logind[1512]: New session c1 of user core. Jul 9 23:48:37.183916 systemd[1647]: Queued start job for default target default.target. 
Jul 9 23:48:37.202848 systemd[1647]: Created slice app.slice - User Application Slice. Jul 9 23:48:37.202895 systemd[1647]: Reached target paths.target - Paths. Jul 9 23:48:37.202937 systemd[1647]: Reached target timers.target - Timers. Jul 9 23:48:37.204418 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:48:37.216904 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:48:37.217192 systemd[1647]: Reached target sockets.target - Sockets. Jul 9 23:48:37.217333 systemd[1647]: Reached target basic.target - Basic System. Jul 9 23:48:37.217435 systemd[1647]: Reached target default.target - Main User Target. Jul 9 23:48:37.217532 systemd[1647]: Startup finished in 156ms. Jul 9 23:48:37.217554 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:48:37.219314 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:48:37.281961 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:40348.service - OpenSSH per-connection server daemon (10.0.0.1:40348). Jul 9 23:48:37.339238 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 40348 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:48:37.340672 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:37.344786 systemd-logind[1512]: New session 2 of user core. Jul 9 23:48:37.360914 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 23:48:37.413378 sshd[1660]: Connection closed by 10.0.0.1 port 40348 Jul 9 23:48:37.413251 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:37.425676 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:40348.service: Deactivated successfully. Jul 9 23:48:37.427260 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 23:48:37.428083 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit. 
Jul 9 23:48:37.430736 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:40364.service - OpenSSH per-connection server daemon (10.0.0.1:40364).
Jul 9 23:48:37.431361 systemd-logind[1512]: Removed session 2.
Jul 9 23:48:37.485161 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 40364 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.486572 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.490422 systemd-logind[1512]: New session 3 of user core.
Jul 9 23:48:37.499904 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 23:48:37.549709 sshd[1668]: Connection closed by 10.0.0.1 port 40364
Jul 9 23:48:37.550136 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.569581 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:40364.service: Deactivated successfully.
Jul 9 23:48:37.571372 systemd[1]: session-3.scope: Deactivated successfully.
Jul 9 23:48:37.573907 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit.
Jul 9 23:48:37.575426 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:40374.service - OpenSSH per-connection server daemon (10.0.0.1:40374).
Jul 9 23:48:37.576437 systemd-logind[1512]: Removed session 3.
Jul 9 23:48:37.627788 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 40374 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.629164 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.636124 systemd-logind[1512]: New session 4 of user core.
Jul 9 23:48:37.643933 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 23:48:37.704673 sshd[1676]: Connection closed by 10.0.0.1 port 40374
Jul 9 23:48:37.704521 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.715169 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:40374.service: Deactivated successfully.
Jul 9 23:48:37.716994 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 23:48:37.719908 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit.
Jul 9 23:48:37.725059 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:40388.service - OpenSSH per-connection server daemon (10.0.0.1:40388).
Jul 9 23:48:37.726230 systemd-logind[1512]: Removed session 4.
Jul 9 23:48:37.766470 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.768167 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.772667 systemd-logind[1512]: New session 5 of user core.
Jul 9 23:48:37.782921 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 23:48:37.840699 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 23:48:37.840996 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:37.861560 sudo[1685]: pam_unix(sudo:session): session closed for user root
Jul 9 23:48:37.865502 sshd[1684]: Connection closed by 10.0.0.1 port 40388
Jul 9 23:48:37.865381 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.882498 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:40388.service: Deactivated successfully.
Jul 9 23:48:37.885161 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 23:48:37.887907 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit.
Jul 9 23:48:37.890892 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394).
Jul 9 23:48:37.891514 systemd-logind[1512]: Removed session 5.
Jul 9 23:48:37.946613 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.948605 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.954040 systemd-logind[1512]: New session 6 of user core.
Jul 9 23:48:37.963927 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 23:48:38.016151 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 23:48:38.016724 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:38.022391 sudo[1695]: pam_unix(sudo:session): session closed for user root
Jul 9 23:48:38.031593 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 23:48:38.032543 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:38.045011 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:48:38.099743 augenrules[1717]: No rules
Jul 9 23:48:38.101024 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:48:38.101255 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:48:38.102649 sudo[1694]: pam_unix(sudo:session): session closed for user root
Jul 9 23:48:38.104904 sshd[1693]: Connection closed by 10.0.0.1 port 40394
Jul 9 23:48:38.105465 sshd-session[1691]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:38.113790 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:40394.service: Deactivated successfully.
Jul 9 23:48:38.115385 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 23:48:38.116111 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit.
Jul 9 23:48:38.117962 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:40402.service - OpenSSH per-connection server daemon (10.0.0.1:40402).
Jul 9 23:48:38.119262 systemd-logind[1512]: Removed session 6.
Jul 9 23:48:38.175078 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 40402 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:38.176193 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:38.180846 systemd-logind[1512]: New session 7 of user core.
Jul 9 23:48:38.190900 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 23:48:38.244046 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 23:48:38.244311 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:38.815489 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 23:48:38.834086 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 23:48:39.435880 dockerd[1750]: time="2025-07-09T23:48:39.435821518Z" level=info msg="Starting up"
Jul 9 23:48:39.437638 dockerd[1750]: time="2025-07-09T23:48:39.437606579Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 23:48:39.607947 dockerd[1750]: time="2025-07-09T23:48:39.607908326Z" level=info msg="Loading containers: start."
Jul 9 23:48:39.616718 kernel: Initializing XFRM netlink socket
Jul 9 23:48:39.845789 systemd-networkd[1423]: docker0: Link UP
Jul 9 23:48:39.849802 dockerd[1750]: time="2025-07-09T23:48:39.849749540Z" level=info msg="Loading containers: done."
Jul 9 23:48:39.865357 dockerd[1750]: time="2025-07-09T23:48:39.865238467Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 23:48:39.865567 dockerd[1750]: time="2025-07-09T23:48:39.865395034Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 9 23:48:39.865567 dockerd[1750]: time="2025-07-09T23:48:39.865501058Z" level=info msg="Initializing buildkit"
Jul 9 23:48:39.890122 dockerd[1750]: time="2025-07-09T23:48:39.890081525Z" level=info msg="Completed buildkit initialization"
Jul 9 23:48:39.895277 dockerd[1750]: time="2025-07-09T23:48:39.895232983Z" level=info msg="Daemon has completed initialization"
Jul 9 23:48:39.895356 dockerd[1750]: time="2025-07-09T23:48:39.895318650Z" level=info msg="API listen on /run/docker.sock"
Jul 9 23:48:39.895468 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 23:48:40.542882 containerd[1522]: time="2025-07-09T23:48:40.542728257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 9 23:48:41.121891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086082525.mount: Deactivated successfully.
Jul 9 23:48:42.050268 containerd[1522]: time="2025-07-09T23:48:42.050218305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:42.050901 containerd[1522]: time="2025-07-09T23:48:42.050878691Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 9 23:48:42.051769 containerd[1522]: time="2025-07-09T23:48:42.051738129Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:42.054692 containerd[1522]: time="2025-07-09T23:48:42.054660355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:42.055754 containerd[1522]: time="2025-07-09T23:48:42.055718562Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.512930803s"
Jul 9 23:48:42.055798 containerd[1522]: time="2025-07-09T23:48:42.055756216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 9 23:48:42.056368 containerd[1522]: time="2025-07-09T23:48:42.056339956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 9 23:48:43.213084 containerd[1522]: time="2025-07-09T23:48:43.210676449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:43.213675 containerd[1522]: time="2025-07-09T23:48:43.213623139Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 9 23:48:43.215420 containerd[1522]: time="2025-07-09T23:48:43.215356069Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:43.217796 containerd[1522]: time="2025-07-09T23:48:43.217749880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:43.219690 containerd[1522]: time="2025-07-09T23:48:43.219627150Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.163256202s"
Jul 9 23:48:43.219690 containerd[1522]: time="2025-07-09T23:48:43.219683155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 9 23:48:43.220291 containerd[1522]: time="2025-07-09T23:48:43.220254324Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 9 23:48:44.322778 containerd[1522]: time="2025-07-09T23:48:44.322653039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:44.323958 containerd[1522]: time="2025-07-09T23:48:44.323703505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 9 23:48:44.324977 containerd[1522]: time="2025-07-09T23:48:44.324928450Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:44.328487 containerd[1522]: time="2025-07-09T23:48:44.328447838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:44.329744 containerd[1522]: time="2025-07-09T23:48:44.329625183Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.109339433s"
Jul 9 23:48:44.329744 containerd[1522]: time="2025-07-09T23:48:44.329684743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 9 23:48:44.330198 containerd[1522]: time="2025-07-09T23:48:44.330163164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 9 23:48:45.275882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016108715.mount: Deactivated successfully.
Jul 9 23:48:45.642346 containerd[1522]: time="2025-07-09T23:48:45.642304836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:45.643087 containerd[1522]: time="2025-07-09T23:48:45.642932175Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 9 23:48:45.643933 containerd[1522]: time="2025-07-09T23:48:45.643886885Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:45.645834 containerd[1522]: time="2025-07-09T23:48:45.645767529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:45.646572 containerd[1522]: time="2025-07-09T23:48:45.646230093Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.316026694s"
Jul 9 23:48:45.646572 containerd[1522]: time="2025-07-09T23:48:45.646262863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 9 23:48:45.646713 containerd[1522]: time="2025-07-09T23:48:45.646660090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 9 23:48:46.266866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404167655.mount: Deactivated successfully.
Jul 9 23:48:46.611321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:48:46.612917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:48:46.783917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:46.798068 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:48:46.928593 kubelet[2087]: E0709 23:48:46.928471 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:48:46.932048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:48:46.932199 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:48:46.933805 systemd[1]: kubelet.service: Consumed 168ms CPU time, 107.8M memory peak.
Jul 9 23:48:47.149011 containerd[1522]: time="2025-07-09T23:48:47.148946662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:47.149592 containerd[1522]: time="2025-07-09T23:48:47.149552083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 9 23:48:47.150725 containerd[1522]: time="2025-07-09T23:48:47.150695146Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:47.154011 containerd[1522]: time="2025-07-09T23:48:47.153964412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:47.155051 containerd[1522]: time="2025-07-09T23:48:47.155006511Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.50829543s"
Jul 9 23:48:47.155162 containerd[1522]: time="2025-07-09T23:48:47.155145895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 9 23:48:47.155721 containerd[1522]: time="2025-07-09T23:48:47.155678261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 23:48:47.592750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914915700.mount: Deactivated successfully.
Jul 9 23:48:47.602591 containerd[1522]: time="2025-07-09T23:48:47.602237440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:48:47.602768 containerd[1522]: time="2025-07-09T23:48:47.602739964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 9 23:48:47.603753 containerd[1522]: time="2025-07-09T23:48:47.603719156Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:48:47.605772 containerd[1522]: time="2025-07-09T23:48:47.605734045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:48:47.606489 containerd[1522]: time="2025-07-09T23:48:47.606458311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 450.432617ms"
Jul 9 23:48:47.606575 containerd[1522]: time="2025-07-09T23:48:47.606558631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 9 23:48:47.607053 containerd[1522]: time="2025-07-09T23:48:47.607023097Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 9 23:48:48.228048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416262341.mount: Deactivated successfully.
Jul 9 23:48:49.945725 containerd[1522]: time="2025-07-09T23:48:49.945573479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:49.946652 containerd[1522]: time="2025-07-09T23:48:49.946407652Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 9 23:48:49.948041 containerd[1522]: time="2025-07-09T23:48:49.947515326Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:49.950341 containerd[1522]: time="2025-07-09T23:48:49.950307269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:49.951613 containerd[1522]: time="2025-07-09T23:48:49.951568018Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.344384651s"
Jul 9 23:48:49.951713 containerd[1522]: time="2025-07-09T23:48:49.951617917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 9 23:48:54.239502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:54.239678 systemd[1]: kubelet.service: Consumed 168ms CPU time, 107.8M memory peak.
Jul 9 23:48:54.241704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:48:54.260773 systemd[1]: Reload requested from client PID 2187 ('systemctl') (unit session-7.scope)...
Jul 9 23:48:54.260788 systemd[1]: Reloading...
Jul 9 23:48:54.353753 zram_generator::config[2228]: No configuration found.
Jul 9 23:48:54.476565 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:48:54.579356 systemd[1]: Reloading finished in 318 ms.
Jul 9 23:48:54.629326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 23:48:54.629416 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 23:48:54.629751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:54.629802 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95M memory peak.
Jul 9 23:48:54.632602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:48:54.744613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:54.749245 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 23:48:54.784311 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:48:54.784311 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 23:48:54.784311 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:48:54.784637 kubelet[2274]: I0709 23:48:54.784417 2274 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 23:48:55.906723 kubelet[2274]: I0709 23:48:55.906665 2274 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 9 23:48:55.906723 kubelet[2274]: I0709 23:48:55.906712 2274 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 23:48:55.907369 kubelet[2274]: I0709 23:48:55.906989 2274 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 9 23:48:55.985172 kubelet[2274]: E0709 23:48:55.985127 2274 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:48:55.987471 kubelet[2274]: I0709 23:48:55.987290 2274 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 23:48:55.992070 kubelet[2274]: I0709 23:48:55.992043 2274 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 23:48:55.995729 kubelet[2274]: I0709 23:48:55.995675 2274 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 23:48:55.995985 kubelet[2274]: I0709 23:48:55.995935 2274 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 23:48:55.996153 kubelet[2274]: I0709 23:48:55.995975 2274 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 23:48:55.996292 kubelet[2274]: I0709 23:48:55.996216 2274 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 23:48:55.996292 kubelet[2274]: I0709 23:48:55.996226 2274 container_manager_linux.go:304] "Creating device plugin manager"
Jul 9 23:48:55.996441 kubelet[2274]: I0709 23:48:55.996411 2274 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:48:56.006537 kubelet[2274]: I0709 23:48:56.006497 2274 kubelet.go:446] "Attempting to sync node with API server"
Jul 9 23:48:56.006537 kubelet[2274]: I0709 23:48:56.006536 2274 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 23:48:56.006629 kubelet[2274]: I0709 23:48:56.006562 2274 kubelet.go:352] "Adding apiserver pod source"
Jul 9 23:48:56.006629 kubelet[2274]: I0709 23:48:56.006581 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 23:48:56.009316 kubelet[2274]: W0709 23:48:56.009190 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Jul 9 23:48:56.009316 kubelet[2274]: E0709 23:48:56.009266 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:48:56.010125 kubelet[2274]: W0709 23:48:56.010062 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Jul 9 23:48:56.010125 kubelet[2274]: E0709 23:48:56.010101 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:48:56.014955 kubelet[2274]: I0709 23:48:56.014933 2274 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:48:56.015920 kubelet[2274]: I0709 23:48:56.015899 2274 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 9 23:48:56.016269 kubelet[2274]: W0709 23:48:56.016257 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 23:48:56.019284 kubelet[2274]: I0709 23:48:56.019256 2274 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:48:56.019382 kubelet[2274]: I0709 23:48:56.019315 2274 server.go:1287] "Started kubelet"
Jul 9 23:48:56.019419 kubelet[2274]: I0709 23:48:56.019398 2274 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 23:48:56.022746 kubelet[2274]: I0709 23:48:56.022627 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.023063 2274 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.023135 2274 server.go:479] "Adding debug handlers to kubelet server"
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.023552 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.024280 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:48:56.026815 kubelet[2274]: E0709 23:48:56.025824 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.025866 2274 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.026068 2274 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 23:48:56.026815 kubelet[2274]: I0709 23:48:56.026121 2274 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 23:48:56.026815 kubelet[2274]: W0709 23:48:56.026438 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Jul 9 23:48:56.026815 kubelet[2274]: E0709 23:48:56.026479 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:48:56.030140 kubelet[2274]: E0709 23:48:56.024571 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850ba2800a3342f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 23:48:56.019276847 +0000 UTC m=+1.267048867,LastTimestamp:2025-07-09 23:48:56.019276847 +0000 UTC m=+1.267048867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 9
23:48:56.030140 kubelet[2274]: E0709 23:48:56.029155 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" Jul 9 23:48:56.030140 kubelet[2274]: I0709 23:48:56.029339 2274 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:48:56.030140 kubelet[2274]: I0709 23:48:56.029411 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:48:56.031496 kubelet[2274]: I0709 23:48:56.031475 2274 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:48:56.041265 kubelet[2274]: I0709 23:48:56.041223 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:48:56.042398 kubelet[2274]: I0709 23:48:56.042378 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:48:56.042499 kubelet[2274]: I0709 23:48:56.042488 2274 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 23:48:56.042563 kubelet[2274]: I0709 23:48:56.042552 2274 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 23:48:56.042604 kubelet[2274]: I0709 23:48:56.042597 2274 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 23:48:56.042708 kubelet[2274]: E0709 23:48:56.042673 2274 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:48:56.047146 kubelet[2274]: I0709 23:48:56.047112 2274 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:48:56.047146 kubelet[2274]: I0709 23:48:56.047130 2274 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:48:56.047146 kubelet[2274]: I0709 23:48:56.047147 2274 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:48:56.048536 kubelet[2274]: W0709 23:48:56.048436 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jul 9 23:48:56.048536 kubelet[2274]: E0709 23:48:56.048489 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:48:56.050906 kubelet[2274]: I0709 23:48:56.050886 2274 policy_none.go:49] "None policy: Start" Jul 9 23:48:56.050906 kubelet[2274]: I0709 23:48:56.050908 2274 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:48:56.050906 kubelet[2274]: I0709 23:48:56.050919 2274 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:48:56.056031 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:48:56.069872 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 9 23:48:56.073498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 23:48:56.092561 kubelet[2274]: I0709 23:48:56.092535 2274 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:48:56.092999 kubelet[2274]: I0709 23:48:56.092791 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:48:56.092999 kubelet[2274]: I0709 23:48:56.092804 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:48:56.092999 kubelet[2274]: I0709 23:48:56.092984 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:48:56.094487 kubelet[2274]: E0709 23:48:56.094464 2274 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:48:56.094558 kubelet[2274]: E0709 23:48:56.094512 2274 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 23:48:56.151715 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 9 23:48:56.177725 kubelet[2274]: E0709 23:48:56.177442 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:56.181953 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 9 23:48:56.194799 kubelet[2274]: I0709 23:48:56.194770 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:56.195595 kubelet[2274]: E0709 23:48:56.195567 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jul 9 23:48:56.198960 kubelet[2274]: E0709 23:48:56.198937 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:56.200563 systemd[1]: Created slice kubepods-burstable-pod0c2f5542c0626abd96d0b6f2ff386eda.slice - libcontainer container kubepods-burstable-pod0c2f5542c0626abd96d0b6f2ff386eda.slice. Jul 9 23:48:56.202171 kubelet[2274]: E0709 23:48:56.202152 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:56.227629 kubelet[2274]: I0709 23:48:56.227549 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c2f5542c0626abd96d0b6f2ff386eda-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c2f5542c0626abd96d0b6f2ff386eda\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:56.227629 kubelet[2274]: I0709 23:48:56.227584 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:56.227629 kubelet[2274]: I0709 23:48:56.227604 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:56.227975 kubelet[2274]: I0709 23:48:56.227851 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:56.227975 kubelet[2274]: I0709 23:48:56.227876 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:48:56.227975 kubelet[2274]: I0709 23:48:56.227892 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c2f5542c0626abd96d0b6f2ff386eda-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c2f5542c0626abd96d0b6f2ff386eda\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:56.227975 kubelet[2274]: I0709 23:48:56.227910 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c2f5542c0626abd96d0b6f2ff386eda-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c2f5542c0626abd96d0b6f2ff386eda\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:56.227975 kubelet[2274]: I0709 23:48:56.227925 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:56.228112 kubelet[2274]: I0709 23:48:56.227945 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:56.229754 kubelet[2274]: E0709 23:48:56.229714 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" Jul 9 23:48:56.396843 kubelet[2274]: I0709 23:48:56.396808 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:56.397260 kubelet[2274]: E0709 23:48:56.397232 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jul 9 23:48:56.478060 kubelet[2274]: E0709 23:48:56.477945 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:56.480428 containerd[1522]: time="2025-07-09T23:48:56.480393036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:56.500087 kubelet[2274]: E0709 23:48:56.500053 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:56.500477 containerd[1522]: time="2025-07-09T23:48:56.500440860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:56.501427 containerd[1522]: time="2025-07-09T23:48:56.501388479Z" level=info msg="connecting to shim 27ec3115c5f052d56c1216a2e80b7dac35c8e700ee212ecf7b0f335d9c518cc6" address="unix:///run/containerd/s/07f25546e32adb2e52e42a68bbcdce24d1ad78172705cfe6151fb2b961701016" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:56.502947 kubelet[2274]: E0709 23:48:56.502923 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:56.503384 containerd[1522]: time="2025-07-09T23:48:56.503358510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c2f5542c0626abd96d0b6f2ff386eda,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:56.528885 systemd[1]: Started cri-containerd-27ec3115c5f052d56c1216a2e80b7dac35c8e700ee212ecf7b0f335d9c518cc6.scope - libcontainer container 27ec3115c5f052d56c1216a2e80b7dac35c8e700ee212ecf7b0f335d9c518cc6. 
Jul 9 23:48:56.530125 containerd[1522]: time="2025-07-09T23:48:56.530080953Z" level=info msg="connecting to shim ea0ba803972260ee0a0eb595a2d9e531563dccf2468b836e4648e8a94b030bb5" address="unix:///run/containerd/s/c71af7f45a9cfe68c38c97ae54d53be570c84a55076a50972851c6d5144ea571" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:56.538461 containerd[1522]: time="2025-07-09T23:48:56.537876345Z" level=info msg="connecting to shim 6c80799a750936d49d8e532ea51d977d775d9bdfe2ae83842d58bdd4d2b9bca7" address="unix:///run/containerd/s/79b6d0df54ad1474fa05b728f389e2b21828eb7516cd80d4f4316c1b0284a6d4" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:56.559884 systemd[1]: Started cri-containerd-ea0ba803972260ee0a0eb595a2d9e531563dccf2468b836e4648e8a94b030bb5.scope - libcontainer container ea0ba803972260ee0a0eb595a2d9e531563dccf2468b836e4648e8a94b030bb5. Jul 9 23:48:56.563719 systemd[1]: Started cri-containerd-6c80799a750936d49d8e532ea51d977d775d9bdfe2ae83842d58bdd4d2b9bca7.scope - libcontainer container 6c80799a750936d49d8e532ea51d977d775d9bdfe2ae83842d58bdd4d2b9bca7. 
Jul 9 23:48:56.575514 containerd[1522]: time="2025-07-09T23:48:56.575468229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"27ec3115c5f052d56c1216a2e80b7dac35c8e700ee212ecf7b0f335d9c518cc6\"" Jul 9 23:48:56.577070 kubelet[2274]: E0709 23:48:56.576817 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:56.584345 containerd[1522]: time="2025-07-09T23:48:56.584306034Z" level=info msg="CreateContainer within sandbox \"27ec3115c5f052d56c1216a2e80b7dac35c8e700ee212ecf7b0f335d9c518cc6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:48:56.594610 containerd[1522]: time="2025-07-09T23:48:56.593884074Z" level=info msg="Container e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:56.600090 containerd[1522]: time="2025-07-09T23:48:56.600056505Z" level=info msg="CreateContainer within sandbox \"27ec3115c5f052d56c1216a2e80b7dac35c8e700ee212ecf7b0f335d9c518cc6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e\"" Jul 9 23:48:56.600837 containerd[1522]: time="2025-07-09T23:48:56.600815458Z" level=info msg="StartContainer for \"e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e\"" Jul 9 23:48:56.601834 containerd[1522]: time="2025-07-09T23:48:56.601806005Z" level=info msg="connecting to shim e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e" address="unix:///run/containerd/s/07f25546e32adb2e52e42a68bbcdce24d1ad78172705cfe6151fb2b961701016" protocol=ttrpc version=3 Jul 9 23:48:56.609605 containerd[1522]: time="2025-07-09T23:48:56.609566004Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0ba803972260ee0a0eb595a2d9e531563dccf2468b836e4648e8a94b030bb5\"" Jul 9 23:48:56.610411 kubelet[2274]: E0709 23:48:56.610353 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:56.612313 containerd[1522]: time="2025-07-09T23:48:56.612280318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c2f5542c0626abd96d0b6f2ff386eda,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c80799a750936d49d8e532ea51d977d775d9bdfe2ae83842d58bdd4d2b9bca7\"" Jul 9 23:48:56.613151 containerd[1522]: time="2025-07-09T23:48:56.613122682Z" level=info msg="CreateContainer within sandbox \"ea0ba803972260ee0a0eb595a2d9e531563dccf2468b836e4648e8a94b030bb5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:48:56.614008 kubelet[2274]: E0709 23:48:56.613985 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:56.615569 containerd[1522]: time="2025-07-09T23:48:56.615519266Z" level=info msg="CreateContainer within sandbox \"6c80799a750936d49d8e532ea51d977d775d9bdfe2ae83842d58bdd4d2b9bca7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:48:56.626862 systemd[1]: Started cri-containerd-e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e.scope - libcontainer container e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e. 
Jul 9 23:48:56.629522 containerd[1522]: time="2025-07-09T23:48:56.629464883Z" level=info msg="Container dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:56.630578 kubelet[2274]: E0709 23:48:56.630342 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" Jul 9 23:48:56.634323 containerd[1522]: time="2025-07-09T23:48:56.634282983Z" level=info msg="Container 37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:56.641652 containerd[1522]: time="2025-07-09T23:48:56.641604204Z" level=info msg="CreateContainer within sandbox \"ea0ba803972260ee0a0eb595a2d9e531563dccf2468b836e4648e8a94b030bb5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05\"" Jul 9 23:48:56.642133 containerd[1522]: time="2025-07-09T23:48:56.642106632Z" level=info msg="StartContainer for \"dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05\"" Jul 9 23:48:56.643095 containerd[1522]: time="2025-07-09T23:48:56.643068681Z" level=info msg="connecting to shim dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05" address="unix:///run/containerd/s/c71af7f45a9cfe68c38c97ae54d53be570c84a55076a50972851c6d5144ea571" protocol=ttrpc version=3 Jul 9 23:48:56.648087 containerd[1522]: time="2025-07-09T23:48:56.648054524Z" level=info msg="CreateContainer within sandbox \"6c80799a750936d49d8e532ea51d977d775d9bdfe2ae83842d58bdd4d2b9bca7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65\"" Jul 9 23:48:56.649155 containerd[1522]: time="2025-07-09T23:48:56.649093530Z" 
level=info msg="StartContainer for \"37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65\"" Jul 9 23:48:56.650656 containerd[1522]: time="2025-07-09T23:48:56.650615524Z" level=info msg="connecting to shim 37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65" address="unix:///run/containerd/s/79b6d0df54ad1474fa05b728f389e2b21828eb7516cd80d4f4316c1b0284a6d4" protocol=ttrpc version=3 Jul 9 23:48:56.666887 systemd[1]: Started cri-containerd-dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05.scope - libcontainer container dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05. Jul 9 23:48:56.671824 systemd[1]: Started cri-containerd-37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65.scope - libcontainer container 37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65. Jul 9 23:48:56.674516 containerd[1522]: time="2025-07-09T23:48:56.674448173Z" level=info msg="StartContainer for \"e1429ffa8a0679b18565500d40e66a4d290da365ae96996116353cdb36c6b02e\" returns successfully" Jul 9 23:48:56.744580 containerd[1522]: time="2025-07-09T23:48:56.743642687Z" level=info msg="StartContainer for \"dd29af9dd1e5b890b9838cbac693ef75d4a8ca65d39165ed991b6bc22f03fc05\" returns successfully" Jul 9 23:48:56.747846 containerd[1522]: time="2025-07-09T23:48:56.747811658Z" level=info msg="StartContainer for \"37e7f2464d30177956b046ce26624c600faf0f902fae46325bdc249557610d65\" returns successfully" Jul 9 23:48:56.802041 kubelet[2274]: I0709 23:48:56.802007 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:56.802560 kubelet[2274]: E0709 23:48:56.802384 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jul 9 23:48:57.055745 kubelet[2274]: E0709 23:48:57.055635 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:57.056016 kubelet[2274]: E0709 23:48:57.056004 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:57.056224 kubelet[2274]: E0709 23:48:57.056116 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:57.056494 kubelet[2274]: E0709 23:48:57.056479 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:57.058833 kubelet[2274]: E0709 23:48:57.058808 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:57.059699 kubelet[2274]: E0709 23:48:57.058930 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:57.604301 kubelet[2274]: I0709 23:48:57.604269 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:58.062952 kubelet[2274]: E0709 23:48:58.062866 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:58.063251 kubelet[2274]: E0709 23:48:58.063016 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:58.064195 kubelet[2274]: E0709 23:48:58.064173 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jul 9 23:48:58.064332 kubelet[2274]: E0709 23:48:58.064273 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:58.064513 kubelet[2274]: E0709 23:48:58.064493 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:58.064592 kubelet[2274]: E0709 23:48:58.064578 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:48:58.784281 kubelet[2274]: E0709 23:48:58.784221 2274 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 23:48:58.854299 kubelet[2274]: I0709 23:48:58.854214 2274 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:48:58.927209 kubelet[2274]: I0709 23:48:58.927122 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:58.933298 kubelet[2274]: E0709 23:48:58.933261 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:58.933298 kubelet[2274]: I0709 23:48:58.933293 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:48:58.935388 kubelet[2274]: E0709 23:48:58.935249 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 23:48:58.935388 kubelet[2274]: I0709 23:48:58.935274 2274 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:58.937076 kubelet[2274]: E0709 23:48:58.937042 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:59.009340 kubelet[2274]: I0709 23:48:59.009281 2274 apiserver.go:52] "Watching apiserver" Jul 9 23:48:59.026196 kubelet[2274]: I0709 23:48:59.026150 2274 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:49:01.209719 systemd[1]: Reload requested from client PID 2553 ('systemctl') (unit session-7.scope)... Jul 9 23:49:01.209737 systemd[1]: Reloading... Jul 9 23:49:01.302466 zram_generator::config[2596]: No configuration found. Jul 9 23:49:01.483574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:49:01.604651 systemd[1]: Reloading finished in 394 ms. Jul 9 23:49:01.631214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:49:01.643160 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:49:01.643466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:49:01.643527 systemd[1]: kubelet.service: Consumed 1.755s CPU time, 130.9M memory peak. Jul 9 23:49:01.646311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:49:01.790415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:49:01.795109 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:49:01.841507 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:49:01.841507 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 23:49:01.841507 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:49:01.842009 kubelet[2638]: I0709 23:49:01.841932 2638 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:49:01.850989 kubelet[2638]: I0709 23:49:01.850930 2638 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 23:49:01.850989 kubelet[2638]: I0709 23:49:01.850963 2638 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:49:01.851299 kubelet[2638]: I0709 23:49:01.851250 2638 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 23:49:01.854631 kubelet[2638]: I0709 23:49:01.853464 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 9 23:49:01.858726 kubelet[2638]: I0709 23:49:01.858515 2638 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:49:01.863779 kubelet[2638]: I0709 23:49:01.863754 2638 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:49:01.868900 kubelet[2638]: I0709 23:49:01.868866 2638 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:49:01.869092 kubelet[2638]: I0709 23:49:01.869057 2638 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:49:01.869268 kubelet[2638]: I0709 23:49:01.869086 2638 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:49:01.869347 kubelet[2638]: I0709 23:49:01.869276 2638 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:49:01.869347 kubelet[2638]: I0709 23:49:01.869286 2638 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 23:49:01.869347 kubelet[2638]: I0709 23:49:01.869329 2638 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:49:01.869472 kubelet[2638]: I0709 23:49:01.869461 2638 kubelet.go:446] "Attempting to sync node with API server" Jul 9 23:49:01.869493 kubelet[2638]: I0709 23:49:01.869476 2638 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:49:01.869517 kubelet[2638]: I0709 23:49:01.869498 2638 kubelet.go:352] "Adding apiserver pod source" Jul 9 23:49:01.869517 kubelet[2638]: I0709 23:49:01.869512 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:49:01.870766 kubelet[2638]: I0709 23:49:01.870739 2638 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:49:01.871217 kubelet[2638]: I0709 23:49:01.871190 2638 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:49:01.871601 kubelet[2638]: I0709 23:49:01.871567 2638 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 23:49:01.871601 kubelet[2638]: I0709 23:49:01.871604 2638 server.go:1287] "Started kubelet" Jul 9 23:49:01.873380 kubelet[2638]: I0709 23:49:01.873347 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:49:01.874306 kubelet[2638]: I0709 23:49:01.873631 2638 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:49:01.874626 kubelet[2638]: I0709 23:49:01.874589 2638 server.go:479] "Adding debug handlers to kubelet server" Jul 9 23:49:01.876648 kubelet[2638]: I0709 23:49:01.876173 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:49:01.876648 kubelet[2638]: I0709 23:49:01.876385 2638 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:49:01.878503 kubelet[2638]: I0709 23:49:01.878469 2638 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:49:01.881656 kubelet[2638]: I0709 23:49:01.881636 2638 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 23:49:01.881935 kubelet[2638]: I0709 23:49:01.881914 2638 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:49:01.882173 kubelet[2638]: I0709 23:49:01.882159 2638 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:49:01.884423 kubelet[2638]: E0709 23:49:01.884367 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:49:01.886254 kubelet[2638]: I0709 23:49:01.886236 2638 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:49:01.886552 kubelet[2638]: I0709 23:49:01.886527 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:49:01.899215 kubelet[2638]: E0709 23:49:01.899170 2638 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:49:01.899215 kubelet[2638]: I0709 23:49:01.899265 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:49:01.899215 kubelet[2638]: I0709 23:49:01.899337 2638 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:49:01.908071 kubelet[2638]: I0709 23:49:01.908010 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:49:01.908192 kubelet[2638]: I0709 23:49:01.908099 2638 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 23:49:01.908192 kubelet[2638]: I0709 23:49:01.908119 2638 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 23:49:01.908192 kubelet[2638]: I0709 23:49:01.908126 2638 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 23:49:01.908192 kubelet[2638]: E0709 23:49:01.908183 2638 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:49:01.947647 kubelet[2638]: I0709 23:49:01.947618 2638 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:49:01.947803 kubelet[2638]: I0709 23:49:01.947788 2638 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:49:01.947889 kubelet[2638]: I0709 23:49:01.947881 2638 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:49:01.948136 kubelet[2638]: I0709 23:49:01.948118 2638 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:49:01.948210 kubelet[2638]: I0709 23:49:01.948188 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:49:01.948262 kubelet[2638]: I0709 23:49:01.948254 2638 policy_none.go:49] "None policy: Start" Jul 9 23:49:01.948327 kubelet[2638]: I0709 23:49:01.948317 2638 memory_manager.go:186] "Starting memorymanager" 
policy="None" Jul 9 23:49:01.948376 kubelet[2638]: I0709 23:49:01.948369 2638 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:49:01.948533 kubelet[2638]: I0709 23:49:01.948521 2638 state_mem.go:75] "Updated machine memory state" Jul 9 23:49:01.954271 kubelet[2638]: I0709 23:49:01.954225 2638 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:49:01.954421 kubelet[2638]: I0709 23:49:01.954408 2638 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:49:01.954457 kubelet[2638]: I0709 23:49:01.954423 2638 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:49:01.954712 kubelet[2638]: I0709 23:49:01.954663 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:49:01.957679 kubelet[2638]: E0709 23:49:01.957652 2638 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 23:49:02.009883 kubelet[2638]: I0709 23:49:02.009810 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.009883 kubelet[2638]: I0709 23:49:02.009819 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:02.009883 kubelet[2638]: I0709 23:49:02.009809 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:02.058223 kubelet[2638]: I0709 23:49:02.058122 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:49:02.067172 kubelet[2638]: I0709 23:49:02.066284 2638 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 23:49:02.067306 kubelet[2638]: I0709 23:49:02.067260 2638 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:49:02.083025 kubelet[2638]: I0709 23:49:02.082929 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:02.083025 kubelet[2638]: I0709 23:49:02.082974 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:02.083025 kubelet[2638]: I0709 23:49:02.083002 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:02.083025 kubelet[2638]: I0709 23:49:02.083020 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c2f5542c0626abd96d0b6f2ff386eda-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c2f5542c0626abd96d0b6f2ff386eda\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.083337 kubelet[2638]: I0709 23:49:02.083046 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:02.083337 kubelet[2638]: I0709 23:49:02.083070 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:02.083337 kubelet[2638]: I0709 23:49:02.083092 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:02.083337 kubelet[2638]: I0709 23:49:02.083107 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c2f5542c0626abd96d0b6f2ff386eda-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c2f5542c0626abd96d0b6f2ff386eda\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.083337 kubelet[2638]: I0709 23:49:02.083135 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c2f5542c0626abd96d0b6f2ff386eda-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c2f5542c0626abd96d0b6f2ff386eda\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.206675 sudo[2674]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:49:02.206967 sudo[2674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:49:02.315504 kubelet[2638]: E0709 23:49:02.315386 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:02.315819 kubelet[2638]: E0709 23:49:02.315797 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:02.317454 kubelet[2638]: E0709 23:49:02.317432 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:02.650043 sudo[2674]: pam_unix(sudo:session): session closed for user root Jul 9 23:49:02.870384 kubelet[2638]: I0709 23:49:02.870343 2638 apiserver.go:52] "Watching apiserver" Jul 9 23:49:02.882075 kubelet[2638]: I0709 23:49:02.882026 2638 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:49:02.921109 kubelet[2638]: E0709 23:49:02.921002 2638 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:02.921529 kubelet[2638]: I0709 23:49:02.921500 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.925013 kubelet[2638]: E0709 23:49:02.924956 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:02.927697 kubelet[2638]: E0709 23:49:02.927662 2638 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.928330 kubelet[2638]: E0709 23:49:02.928260 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:02.928702 kubelet[2638]: I0709 23:49:02.928246 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.928233181 podStartE2EDuration="928.233181ms" podCreationTimestamp="2025-07-09 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:02.916480736 +0000 UTC m=+1.116685362" watchObservedRunningTime="2025-07-09 23:49:02.928233181 +0000 UTC m=+1.128437807" Jul 9 23:49:02.938594 kubelet[2638]: I0709 23:49:02.938371 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.938356649 podStartE2EDuration="938.356649ms" podCreationTimestamp="2025-07-09 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 
23:49:02.93091825 +0000 UTC m=+1.131122876" watchObservedRunningTime="2025-07-09 23:49:02.938356649 +0000 UTC m=+1.138561275" Jul 9 23:49:02.938594 kubelet[2638]: I0709 23:49:02.938493 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.938490011 podStartE2EDuration="938.490011ms" podCreationTimestamp="2025-07-09 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:02.93792265 +0000 UTC m=+1.138127276" watchObservedRunningTime="2025-07-09 23:49:02.938490011 +0000 UTC m=+1.138694637" Jul 9 23:49:03.922482 kubelet[2638]: E0709 23:49:03.922430 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:03.923292 kubelet[2638]: E0709 23:49:03.923171 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:05.330926 sudo[1729]: pam_unix(sudo:session): session closed for user root Jul 9 23:49:05.332769 sshd[1728]: Connection closed by 10.0.0.1 port 40402 Jul 9 23:49:05.334040 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:05.338714 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:40402.service: Deactivated successfully. Jul 9 23:49:05.339103 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:49:05.341199 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:49:05.341460 systemd[1]: session-7.scope: Consumed 7.818s CPU time, 263.8M memory peak. Jul 9 23:49:05.344630 systemd-logind[1512]: Removed session 7. 
Jul 9 23:49:06.591749 kubelet[2638]: I0709 23:49:06.591679 2638 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:49:06.592219 containerd[1522]: time="2025-07-09T23:49:06.592189687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:49:06.592494 kubelet[2638]: I0709 23:49:06.592362 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:49:07.365657 systemd[1]: Created slice kubepods-burstable-pod09d9652e_00d8_4cb5_96b7_df6aabc1e902.slice - libcontainer container kubepods-burstable-pod09d9652e_00d8_4cb5_96b7_df6aabc1e902.slice. Jul 9 23:49:07.375623 systemd[1]: Created slice kubepods-besteffort-pod22d248c6_d4aa_4c6c_a44f_a8a6af5c642a.slice - libcontainer container kubepods-besteffort-pod22d248c6_d4aa_4c6c_a44f_a8a6af5c642a.slice. Jul 9 23:49:07.378714 kubelet[2638]: E0709 23:49:07.378047 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.415090 kubelet[2638]: I0709 23:49:07.415053 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-bpf-maps\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415238 kubelet[2638]: I0709 23:49:07.415097 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-etc-cni-netd\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415238 kubelet[2638]: I0709 23:49:07.415134 2638 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-kernel\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415238 kubelet[2638]: I0709 23:49:07.415172 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnhwp\" (UniqueName: \"kubernetes.io/projected/22d248c6-d4aa-4c6c-a44f-a8a6af5c642a-kube-api-access-fnhwp\") pod \"kube-proxy-fxbmx\" (UID: \"22d248c6-d4aa-4c6c-a44f-a8a6af5c642a\") " pod="kube-system/kube-proxy-fxbmx" Jul 9 23:49:07.415238 kubelet[2638]: I0709 23:49:07.415192 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hubble-tls\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415238 kubelet[2638]: I0709 23:49:07.415216 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22d248c6-d4aa-4c6c-a44f-a8a6af5c642a-kube-proxy\") pod \"kube-proxy-fxbmx\" (UID: \"22d248c6-d4aa-4c6c-a44f-a8a6af5c642a\") " pod="kube-system/kube-proxy-fxbmx" Jul 9 23:49:07.415402 kubelet[2638]: I0709 23:49:07.415233 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22d248c6-d4aa-4c6c-a44f-a8a6af5c642a-xtables-lock\") pod \"kube-proxy-fxbmx\" (UID: \"22d248c6-d4aa-4c6c-a44f-a8a6af5c642a\") " pod="kube-system/kube-proxy-fxbmx" Jul 9 23:49:07.415402 kubelet[2638]: I0709 23:49:07.415251 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cni-path\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415402 kubelet[2638]: I0709 23:49:07.415272 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-lib-modules\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415402 kubelet[2638]: I0709 23:49:07.415354 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22d248c6-d4aa-4c6c-a44f-a8a6af5c642a-lib-modules\") pod \"kube-proxy-fxbmx\" (UID: \"22d248c6-d4aa-4c6c-a44f-a8a6af5c642a\") " pod="kube-system/kube-proxy-fxbmx" Jul 9 23:49:07.415402 kubelet[2638]: I0709 23:49:07.415373 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-cgroup\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415402 kubelet[2638]: I0709 23:49:07.415394 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09d9652e-00d8-4cb5-96b7-df6aabc1e902-clustermesh-secrets\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415677 kubelet[2638]: I0709 23:49:07.415412 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-config-path\") pod \"cilium-qb757\" (UID: 
\"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415677 kubelet[2638]: I0709 23:49:07.415429 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hostproc\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415677 kubelet[2638]: I0709 23:49:07.415447 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-xtables-lock\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415677 kubelet[2638]: I0709 23:49:07.415470 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-net\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415677 kubelet[2638]: I0709 23:49:07.415489 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jps98\" (UniqueName: \"kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-kube-api-access-jps98\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.415677 kubelet[2638]: I0709 23:49:07.415510 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-run\") pod \"cilium-qb757\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") " pod="kube-system/cilium-qb757" Jul 9 23:49:07.635532 systemd[1]: Created slice 
kubepods-besteffort-pod0ff2f7f6_1790_4e03_b1f0_65e619634106.slice - libcontainer container kubepods-besteffort-pod0ff2f7f6_1790_4e03_b1f0_65e619634106.slice. Jul 9 23:49:07.669707 kubelet[2638]: E0709 23:49:07.669646 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.670871 containerd[1522]: time="2025-07-09T23:49:07.670818856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qb757,Uid:09d9652e-00d8-4cb5-96b7-df6aabc1e902,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:07.687418 kubelet[2638]: E0709 23:49:07.687326 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.688138 containerd[1522]: time="2025-07-09T23:49:07.688107993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fxbmx,Uid:22d248c6-d4aa-4c6c-a44f-a8a6af5c642a,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:07.714520 containerd[1522]: time="2025-07-09T23:49:07.714455608Z" level=info msg="connecting to shim 807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9" address="unix:///run/containerd/s/9dfac7a5f0fa6e1dac371a0e1ebd31b0ff13c2bd0b8bee25b3804b1de462df7d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:07.717497 kubelet[2638]: I0709 23:49:07.717455 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ff2f7f6-1790-4e03-b1f0-65e619634106-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sfz6m\" (UID: \"0ff2f7f6-1790-4e03-b1f0-65e619634106\") " pod="kube-system/cilium-operator-6c4d7847fc-sfz6m" Jul 9 23:49:07.717746 kubelet[2638]: I0709 23:49:07.717670 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kz94f\" (UniqueName: \"kubernetes.io/projected/0ff2f7f6-1790-4e03-b1f0-65e619634106-kube-api-access-kz94f\") pod \"cilium-operator-6c4d7847fc-sfz6m\" (UID: \"0ff2f7f6-1790-4e03-b1f0-65e619634106\") " pod="kube-system/cilium-operator-6c4d7847fc-sfz6m" Jul 9 23:49:07.725391 containerd[1522]: time="2025-07-09T23:49:07.725333845Z" level=info msg="connecting to shim daeef8ef9ec52fd2712cd31d44e0afc76d225ad36b359d1433443b72406eefad" address="unix:///run/containerd/s/95ddd1f2c284e414c5d086c810a773e24d3eb1fff7d00903c0b5df1204a61d8b" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:07.742919 systemd[1]: Started cri-containerd-807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9.scope - libcontainer container 807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9. Jul 9 23:49:07.746377 systemd[1]: Started cri-containerd-daeef8ef9ec52fd2712cd31d44e0afc76d225ad36b359d1433443b72406eefad.scope - libcontainer container daeef8ef9ec52fd2712cd31d44e0afc76d225ad36b359d1433443b72406eefad. 
Jul 9 23:49:07.775248 containerd[1522]: time="2025-07-09T23:49:07.775201332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qb757,Uid:09d9652e-00d8-4cb5-96b7-df6aabc1e902,Namespace:kube-system,Attempt:0,} returns sandbox id \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\"" Jul 9 23:49:07.776626 kubelet[2638]: E0709 23:49:07.776347 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.778821 containerd[1522]: time="2025-07-09T23:49:07.778785728Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 23:49:07.780905 containerd[1522]: time="2025-07-09T23:49:07.780835006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fxbmx,Uid:22d248c6-d4aa-4c6c-a44f-a8a6af5c642a,Namespace:kube-system,Attempt:0,} returns sandbox id \"daeef8ef9ec52fd2712cd31d44e0afc76d225ad36b359d1433443b72406eefad\"" Jul 9 23:49:07.781985 kubelet[2638]: E0709 23:49:07.781961 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.786352 containerd[1522]: time="2025-07-09T23:49:07.786291237Z" level=info msg="CreateContainer within sandbox \"daeef8ef9ec52fd2712cd31d44e0afc76d225ad36b359d1433443b72406eefad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:49:07.803287 containerd[1522]: time="2025-07-09T23:49:07.803224524Z" level=info msg="Container 00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:07.811966 containerd[1522]: time="2025-07-09T23:49:07.811914144Z" level=info msg="CreateContainer within sandbox \"daeef8ef9ec52fd2712cd31d44e0afc76d225ad36b359d1433443b72406eefad\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc\"" Jul 9 23:49:07.812791 containerd[1522]: time="2025-07-09T23:49:07.812744367Z" level=info msg="StartContainer for \"00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc\"" Jul 9 23:49:07.814344 containerd[1522]: time="2025-07-09T23:49:07.814292814Z" level=info msg="connecting to shim 00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc" address="unix:///run/containerd/s/95ddd1f2c284e414c5d086c810a773e24d3eb1fff7d00903c0b5df1204a61d8b" protocol=ttrpc version=3 Jul 9 23:49:07.843932 systemd[1]: Started cri-containerd-00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc.scope - libcontainer container 00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc. Jul 9 23:49:07.886554 containerd[1522]: time="2025-07-09T23:49:07.886370692Z" level=info msg="StartContainer for \"00038d887b88a2e9ae6da4a2fce08495e79fe053b04eb490208917e942a61fbc\" returns successfully" Jul 9 23:49:07.932427 kubelet[2638]: E0709 23:49:07.931537 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.937393 kubelet[2638]: E0709 23:49:07.937044 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.941667 kubelet[2638]: E0709 23:49:07.941629 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:07.942436 containerd[1522]: time="2025-07-09T23:49:07.942354552Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sfz6m,Uid:0ff2f7f6-1790-4e03-b1f0-65e619634106,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:08.029797 containerd[1522]: time="2025-07-09T23:49:08.029742856Z" level=info msg="connecting to shim e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038" address="unix:///run/containerd/s/351791c5d114acf70c2275521679698ed66aaf437c91560a8c84014a86ca1982" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:08.054918 systemd[1]: Started cri-containerd-e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038.scope - libcontainer container e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038. Jul 9 23:49:08.104625 containerd[1522]: time="2025-07-09T23:49:08.104572971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sfz6m,Uid:0ff2f7f6-1790-4e03-b1f0-65e619634106,Namespace:kube-system,Attempt:0,} returns sandbox id \"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\"" Jul 9 23:49:08.105406 kubelet[2638]: E0709 23:49:08.105384 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:08.865678 kubelet[2638]: E0709 23:49:08.865643 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:08.883774 kubelet[2638]: I0709 23:49:08.883467 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fxbmx" podStartSLOduration=1.8834486049999999 podStartE2EDuration="1.883448605s" podCreationTimestamp="2025-07-09 23:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:08.005140257 +0000 UTC m=+6.205344883" watchObservedRunningTime="2025-07-09 
23:49:08.883448605 +0000 UTC m=+7.083653231" Jul 9 23:49:08.938638 kubelet[2638]: E0709 23:49:08.938584 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:08.939197 kubelet[2638]: E0709 23:49:08.939143 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:10.151928 kubelet[2638]: E0709 23:49:10.151884 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:10.943483 kubelet[2638]: E0709 23:49:10.943443 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:15.520747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318947907.mount: Deactivated successfully. 
Jul 9 23:49:17.019229 containerd[1522]: time="2025-07-09T23:49:17.019166452Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:17.019847 containerd[1522]: time="2025-07-09T23:49:17.019791947Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 9 23:49:17.021822 containerd[1522]: time="2025-07-09T23:49:17.021778637Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:17.023308 containerd[1522]: time="2025-07-09T23:49:17.023261521Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.244432205s" Jul 9 23:49:17.023370 containerd[1522]: time="2025-07-09T23:49:17.023309341Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 9 23:49:17.031460 containerd[1522]: time="2025-07-09T23:49:17.031413604Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 23:49:17.048663 containerd[1522]: time="2025-07-09T23:49:17.048616615Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:49:17.058792 containerd[1522]: time="2025-07-09T23:49:17.058128011Z" level=info msg="Container 814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:17.062587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069807087.mount: Deactivated successfully. Jul 9 23:49:17.066815 containerd[1522]: time="2025-07-09T23:49:17.066676175Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\"" Jul 9 23:49:17.070318 containerd[1522]: time="2025-07-09T23:49:17.070267799Z" level=info msg="StartContainer for \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\"" Jul 9 23:49:17.071375 containerd[1522]: time="2025-07-09T23:49:17.071341997Z" level=info msg="connecting to shim 814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc" address="unix:///run/containerd/s/9dfac7a5f0fa6e1dac371a0e1ebd31b0ff13c2bd0b8bee25b3804b1de462df7d" protocol=ttrpc version=3 Jul 9 23:49:17.120918 systemd[1]: Started cri-containerd-814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc.scope - libcontainer container 814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc. Jul 9 23:49:17.155174 containerd[1522]: time="2025-07-09T23:49:17.155084887Z" level=info msg="StartContainer for \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" returns successfully" Jul 9 23:49:17.229498 systemd[1]: cri-containerd-814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc.scope: Deactivated successfully. 
Jul 9 23:49:17.261603 containerd[1522]: time="2025-07-09T23:49:17.261520987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" id:\"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" pid:3058 exited_at:{seconds:1752104957 nanos:249852991}" Jul 9 23:49:17.261603 containerd[1522]: time="2025-07-09T23:49:17.261522467Z" level=info msg="received exit event container_id:\"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" id:\"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" pid:3058 exited_at:{seconds:1752104957 nanos:249852991}" Jul 9 23:49:17.294332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc-rootfs.mount: Deactivated successfully. Jul 9 23:49:17.967844 kubelet[2638]: E0709 23:49:17.967807 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:17.972129 containerd[1522]: time="2025-07-09T23:49:17.972086266Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:49:17.988589 containerd[1522]: time="2025-07-09T23:49:17.988540332Z" level=info msg="Container da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:17.998679 containerd[1522]: time="2025-07-09T23:49:17.998558695Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\"" Jul 9 23:49:18.001823 containerd[1522]: 
time="2025-07-09T23:49:18.001783648Z" level=info msg="StartContainer for \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\"" Jul 9 23:49:18.003467 containerd[1522]: time="2025-07-09T23:49:18.003436167Z" level=info msg="connecting to shim da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703" address="unix:///run/containerd/s/9dfac7a5f0fa6e1dac371a0e1ebd31b0ff13c2bd0b8bee25b3804b1de462df7d" protocol=ttrpc version=3 Jul 9 23:49:18.028925 systemd[1]: Started cri-containerd-da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703.scope - libcontainer container da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703. Jul 9 23:49:18.055107 containerd[1522]: time="2025-07-09T23:49:18.055037711Z" level=info msg="StartContainer for \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" returns successfully" Jul 9 23:49:18.082302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:49:18.082530 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:49:18.083278 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:49:18.084667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:49:18.086476 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:49:18.086908 systemd[1]: cri-containerd-da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703.scope: Deactivated successfully. 
Jul 9 23:49:18.095616 containerd[1522]: time="2025-07-09T23:49:18.095577690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" id:\"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" pid:3104 exited_at:{seconds:1752104958 nanos:95271612}" Jul 9 23:49:18.095944 containerd[1522]: time="2025-07-09T23:49:18.095858199Z" level=info msg="received exit event container_id:\"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" id:\"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" pid:3104 exited_at:{seconds:1752104958 nanos:95271612}" Jul 9 23:49:18.114470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703-rootfs.mount: Deactivated successfully. Jul 9 23:49:18.123432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:49:18.970311 kubelet[2638]: E0709 23:49:18.970275 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:18.974204 containerd[1522]: time="2025-07-09T23:49:18.974162251Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:49:18.989300 containerd[1522]: time="2025-07-09T23:49:18.989010001Z" level=info msg="Container afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:19.002124 containerd[1522]: time="2025-07-09T23:49:19.002081437Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\"" Jul 9 
23:49:19.002965 containerd[1522]: time="2025-07-09T23:49:19.002905100Z" level=info msg="StartContainer for \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\"" Jul 9 23:49:19.005754 containerd[1522]: time="2025-07-09T23:49:19.005715375Z" level=info msg="connecting to shim afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091" address="unix:///run/containerd/s/9dfac7a5f0fa6e1dac371a0e1ebd31b0ff13c2bd0b8bee25b3804b1de462df7d" protocol=ttrpc version=3 Jul 9 23:49:19.035917 systemd[1]: Started cri-containerd-afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091.scope - libcontainer container afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091. Jul 9 23:49:19.060999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151396406.mount: Deactivated successfully. Jul 9 23:49:19.098620 containerd[1522]: time="2025-07-09T23:49:19.098581611Z" level=info msg="StartContainer for \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" returns successfully" Jul 9 23:49:19.100015 systemd[1]: cri-containerd-afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091.scope: Deactivated successfully. 
Jul 9 23:49:19.102952 containerd[1522]: time="2025-07-09T23:49:19.102858505Z" level=info msg="received exit event container_id:\"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" id:\"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" pid:3163 exited_at:{seconds:1752104959 nanos:102663954}" Jul 9 23:49:19.102952 containerd[1522]: time="2025-07-09T23:49:19.102923209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" id:\"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" pid:3163 exited_at:{seconds:1752104959 nanos:102663954}" Jul 9 23:49:19.136292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091-rootfs.mount: Deactivated successfully. Jul 9 23:49:19.239205 containerd[1522]: time="2025-07-09T23:49:19.239090630Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:19.239930 containerd[1522]: time="2025-07-09T23:49:19.239902569Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 9 23:49:19.240819 containerd[1522]: time="2025-07-09T23:49:19.240771249Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:19.241926 containerd[1522]: time="2025-07-09T23:49:19.241895263Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.210438082s" Jul 9 23:49:19.241999 containerd[1522]: time="2025-07-09T23:49:19.241929155Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 9 23:49:19.244940 containerd[1522]: time="2025-07-09T23:49:19.244912894Z" level=info msg="CreateContainer within sandbox \"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 23:49:19.252943 containerd[1522]: time="2025-07-09T23:49:19.252343670Z" level=info msg="Container d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:19.255042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124790122.mount: Deactivated successfully. 
Jul 9 23:49:19.260879 containerd[1522]: time="2025-07-09T23:49:19.260834036Z" level=info msg="CreateContainer within sandbox \"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\"" Jul 9 23:49:19.261406 containerd[1522]: time="2025-07-09T23:49:19.261386280Z" level=info msg="StartContainer for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\"" Jul 9 23:49:19.262846 containerd[1522]: time="2025-07-09T23:49:19.262819487Z" level=info msg="connecting to shim d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e" address="unix:///run/containerd/s/351791c5d114acf70c2275521679698ed66aaf437c91560a8c84014a86ca1982" protocol=ttrpc version=3 Jul 9 23:49:19.284915 systemd[1]: Started cri-containerd-d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e.scope - libcontainer container d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e. Jul 9 23:49:19.322271 containerd[1522]: time="2025-07-09T23:49:19.322222841Z" level=info msg="StartContainer for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" returns successfully" Jul 9 23:49:19.409738 update_engine[1513]: I20250709 23:49:19.409391 1513 update_attempter.cc:509] Updating boot flags... 
Jul 9 23:49:19.986004 kubelet[2638]: E0709 23:49:19.985697 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:20.000713 containerd[1522]: time="2025-07-09T23:49:19.999177514Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:49:20.028843 containerd[1522]: time="2025-07-09T23:49:20.028797840Z" level=info msg="Container b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:20.028966 kubelet[2638]: E0709 23:49:20.028823 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:20.042894 containerd[1522]: time="2025-07-09T23:49:20.042832757Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\"" Jul 9 23:49:20.044704 containerd[1522]: time="2025-07-09T23:49:20.044638790Z" level=info msg="StartContainer for \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\"" Jul 9 23:49:20.045812 containerd[1522]: time="2025-07-09T23:49:20.045734534Z" level=info msg="connecting to shim b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004" address="unix:///run/containerd/s/9dfac7a5f0fa6e1dac371a0e1ebd31b0ff13c2bd0b8bee25b3804b1de462df7d" protocol=ttrpc version=3 Jul 9 23:49:20.051093 kubelet[2638]: I0709 23:49:20.050149 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sfz6m" podStartSLOduration=1.914305481 
podStartE2EDuration="13.050130954s" podCreationTimestamp="2025-07-09 23:49:07 +0000 UTC" firstStartedPulling="2025-07-09 23:49:08.106971722 +0000 UTC m=+6.307176348" lastFinishedPulling="2025-07-09 23:49:19.242797195 +0000 UTC m=+17.443001821" observedRunningTime="2025-07-09 23:49:20.049199228 +0000 UTC m=+18.249403854" watchObservedRunningTime="2025-07-09 23:49:20.050130954 +0000 UTC m=+18.250335580" Jul 9 23:49:20.099909 systemd[1]: Started cri-containerd-b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004.scope - libcontainer container b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004. Jul 9 23:49:20.150185 systemd[1]: cri-containerd-b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004.scope: Deactivated successfully. Jul 9 23:49:20.153491 containerd[1522]: time="2025-07-09T23:49:20.151845714Z" level=info msg="received exit event container_id:\"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" id:\"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" pid:3257 exited_at:{seconds:1752104960 nanos:151388593}" Jul 9 23:49:20.153491 containerd[1522]: time="2025-07-09T23:49:20.152057948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" id:\"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" pid:3257 exited_at:{seconds:1752104960 nanos:151388593}" Jul 9 23:49:20.158833 containerd[1522]: time="2025-07-09T23:49:20.158763777Z" level=info msg="StartContainer for \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" returns successfully" Jul 9 23:49:20.172308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004-rootfs.mount: Deactivated successfully. 
Jul 9 23:49:21.027061 kubelet[2638]: E0709 23:49:21.026054 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:21.027061 kubelet[2638]: E0709 23:49:21.026168 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:21.031480 containerd[1522]: time="2025-07-09T23:49:21.031432768Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:49:21.053720 containerd[1522]: time="2025-07-09T23:49:21.053119684Z" level=info msg="Container f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:21.056739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380206185.mount: Deactivated successfully. 
Jul 9 23:49:21.071078 containerd[1522]: time="2025-07-09T23:49:21.070960196Z" level=info msg="CreateContainer within sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\"" Jul 9 23:49:21.071918 containerd[1522]: time="2025-07-09T23:49:21.071885825Z" level=info msg="StartContainer for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\"" Jul 9 23:49:21.072944 containerd[1522]: time="2025-07-09T23:49:21.072919170Z" level=info msg="connecting to shim f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf" address="unix:///run/containerd/s/9dfac7a5f0fa6e1dac371a0e1ebd31b0ff13c2bd0b8bee25b3804b1de462df7d" protocol=ttrpc version=3 Jul 9 23:49:21.099088 systemd[1]: Started cri-containerd-f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf.scope - libcontainer container f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf. Jul 9 23:49:21.136987 containerd[1522]: time="2025-07-09T23:49:21.136935889Z" level=info msg="StartContainer for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" returns successfully" Jul 9 23:49:21.306319 containerd[1522]: time="2025-07-09T23:49:21.306101852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" id:\"78d0802dc145c0a6ff0fcf9e2b957ff9e6bbae114646cc869e22d97e80dd72ed\" pid:3323 exited_at:{seconds:1752104961 nanos:305605926}" Jul 9 23:49:21.365548 kubelet[2638]: I0709 23:49:21.365514 2638 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 23:49:21.421856 systemd[1]: Created slice kubepods-burstable-pod4967c598_9590_4c79_bd18_9d4f6da82ca7.slice - libcontainer container kubepods-burstable-pod4967c598_9590_4c79_bd18_9d4f6da82ca7.slice. 
Jul 9 23:49:21.428847 kubelet[2638]: I0709 23:49:21.428756 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9lv\" (UniqueName: \"kubernetes.io/projected/4967c598-9590-4c79-bd18-9d4f6da82ca7-kube-api-access-4j9lv\") pod \"coredns-668d6bf9bc-7p8cc\" (UID: \"4967c598-9590-4c79-bd18-9d4f6da82ca7\") " pod="kube-system/coredns-668d6bf9bc-7p8cc" Jul 9 23:49:21.428847 kubelet[2638]: I0709 23:49:21.428797 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4967c598-9590-4c79-bd18-9d4f6da82ca7-config-volume\") pod \"coredns-668d6bf9bc-7p8cc\" (UID: \"4967c598-9590-4c79-bd18-9d4f6da82ca7\") " pod="kube-system/coredns-668d6bf9bc-7p8cc" Jul 9 23:49:21.428847 kubelet[2638]: I0709 23:49:21.428817 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d332047c-157e-4832-ad12-51f99d6eb3bd-config-volume\") pod \"coredns-668d6bf9bc-swfc6\" (UID: \"d332047c-157e-4832-ad12-51f99d6eb3bd\") " pod="kube-system/coredns-668d6bf9bc-swfc6" Jul 9 23:49:21.429000 kubelet[2638]: I0709 23:49:21.428866 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc27l\" (UniqueName: \"kubernetes.io/projected/d332047c-157e-4832-ad12-51f99d6eb3bd-kube-api-access-vc27l\") pod \"coredns-668d6bf9bc-swfc6\" (UID: \"d332047c-157e-4832-ad12-51f99d6eb3bd\") " pod="kube-system/coredns-668d6bf9bc-swfc6" Jul 9 23:49:21.429808 systemd[1]: Created slice kubepods-burstable-podd332047c_157e_4832_ad12_51f99d6eb3bd.slice - libcontainer container kubepods-burstable-podd332047c_157e_4832_ad12_51f99d6eb3bd.slice. 
Jul 9 23:49:21.725154 kubelet[2638]: E0709 23:49:21.725021 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:21.728258 containerd[1522]: time="2025-07-09T23:49:21.728215211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7p8cc,Uid:4967c598-9590-4c79-bd18-9d4f6da82ca7,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:21.733712 kubelet[2638]: E0709 23:49:21.733669 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:21.734178 containerd[1522]: time="2025-07-09T23:49:21.734146590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-swfc6,Uid:d332047c-157e-4832-ad12-51f99d6eb3bd,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:22.038625 kubelet[2638]: E0709 23:49:22.038002 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:22.072732 kubelet[2638]: I0709 23:49:22.072322 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qb757" podStartSLOduration=5.819335058 podStartE2EDuration="15.072305815s" podCreationTimestamp="2025-07-09 23:49:07 +0000 UTC" firstStartedPulling="2025-07-09 23:49:07.778167494 +0000 UTC m=+5.978372120" lastFinishedPulling="2025-07-09 23:49:17.031138251 +0000 UTC m=+15.231342877" observedRunningTime="2025-07-09 23:49:22.072206663 +0000 UTC m=+20.272411289" watchObservedRunningTime="2025-07-09 23:49:22.072305815 +0000 UTC m=+20.272510441" Jul 9 23:49:23.039090 kubelet[2638]: E0709 23:49:23.039059 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:23.592917 systemd-networkd[1423]: cilium_host: Link UP Jul 9 23:49:23.593512 systemd-networkd[1423]: cilium_net: Link UP Jul 9 23:49:23.593887 systemd-networkd[1423]: cilium_net: Gained carrier Jul 9 23:49:23.594158 systemd-networkd[1423]: cilium_host: Gained carrier Jul 9 23:49:23.693525 systemd-networkd[1423]: cilium_vxlan: Link UP Jul 9 23:49:23.693853 systemd-networkd[1423]: cilium_vxlan: Gained carrier Jul 9 23:49:24.021754 kernel: NET: Registered PF_ALG protocol family Jul 9 23:49:24.041245 kubelet[2638]: E0709 23:49:24.041212 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:24.068834 systemd-networkd[1423]: cilium_net: Gained IPv6LL Jul 9 23:49:24.288336 systemd-networkd[1423]: cilium_host: Gained IPv6LL Jul 9 23:49:24.661975 systemd-networkd[1423]: lxc_health: Link UP Jul 9 23:49:24.673693 systemd-networkd[1423]: lxc_health: Gained carrier Jul 9 23:49:24.858818 systemd-networkd[1423]: lxc314d831a2861: Link UP Jul 9 23:49:24.866032 systemd-networkd[1423]: lxc77484267051f: Link UP Jul 9 23:49:24.874882 kernel: eth0: renamed from tmp3a1f1 Jul 9 23:49:24.875030 kernel: eth0: renamed from tmp18e8d Jul 9 23:49:24.875798 systemd-networkd[1423]: lxc314d831a2861: Gained carrier Jul 9 23:49:24.876958 systemd-networkd[1423]: lxc77484267051f: Gained carrier Jul 9 23:49:25.044048 kubelet[2638]: E0709 23:49:25.043702 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:25.180854 systemd-networkd[1423]: cilium_vxlan: Gained IPv6LL Jul 9 23:49:26.047047 kubelet[2638]: E0709 23:49:26.046993 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 9 23:49:26.396857 systemd-networkd[1423]: lxc314d831a2861: Gained IPv6LL Jul 9 23:49:26.716867 systemd-networkd[1423]: lxc_health: Gained IPv6LL Jul 9 23:49:26.780849 systemd-networkd[1423]: lxc77484267051f: Gained IPv6LL Jul 9 23:49:27.048418 kubelet[2638]: E0709 23:49:27.048243 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:28.640418 containerd[1522]: time="2025-07-09T23:49:28.640368525Z" level=info msg="connecting to shim 18e8df045e7bc1dee2fa10ad8790ce38919e1c89d6928b48f9021af82e349934" address="unix:///run/containerd/s/4565fbef472d6ec3ae5bd61174d532d85b01faca93dec766102887be4f189140" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:28.645477 containerd[1522]: time="2025-07-09T23:49:28.645434273Z" level=info msg="connecting to shim 3a1f1c432233e805c7f35293e69882b09b56535d97f7b3e61b47ca475b57dc12" address="unix:///run/containerd/s/22ae6e920a51122b7331aa834f5291e59da8e37dce3ef9a09cbfab6d5ab031a2" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:28.666912 systemd[1]: Started cri-containerd-18e8df045e7bc1dee2fa10ad8790ce38919e1c89d6928b48f9021af82e349934.scope - libcontainer container 18e8df045e7bc1dee2fa10ad8790ce38919e1c89d6928b48f9021af82e349934. Jul 9 23:49:28.670798 systemd[1]: Started cri-containerd-3a1f1c432233e805c7f35293e69882b09b56535d97f7b3e61b47ca475b57dc12.scope - libcontainer container 3a1f1c432233e805c7f35293e69882b09b56535d97f7b3e61b47ca475b57dc12. 
Jul 9 23:49:28.681266 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:28.686906 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:28.706642 containerd[1522]: time="2025-07-09T23:49:28.706597381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7p8cc,Uid:4967c598-9590-4c79-bd18-9d4f6da82ca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"18e8df045e7bc1dee2fa10ad8790ce38919e1c89d6928b48f9021af82e349934\"" Jul 9 23:49:28.709398 containerd[1522]: time="2025-07-09T23:49:28.709354129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-swfc6,Uid:d332047c-157e-4832-ad12-51f99d6eb3bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a1f1c432233e805c7f35293e69882b09b56535d97f7b3e61b47ca475b57dc12\"" Jul 9 23:49:28.711004 kubelet[2638]: E0709 23:49:28.710733 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:28.711329 kubelet[2638]: E0709 23:49:28.711137 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:28.712909 containerd[1522]: time="2025-07-09T23:49:28.712857939Z" level=info msg="CreateContainer within sandbox \"3a1f1c432233e805c7f35293e69882b09b56535d97f7b3e61b47ca475b57dc12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:49:28.714879 containerd[1522]: time="2025-07-09T23:49:28.714842180Z" level=info msg="CreateContainer within sandbox \"18e8df045e7bc1dee2fa10ad8790ce38919e1c89d6928b48f9021af82e349934\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:49:28.739446 containerd[1522]: time="2025-07-09T23:49:28.739399333Z" level=info msg="Container d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:28.746639 containerd[1522]: time="2025-07-09T23:49:28.746595877Z" level=info msg="Container e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:28.830039 containerd[1522]: time="2025-07-09T23:49:28.829956686Z" level=info msg="CreateContainer within sandbox \"3a1f1c432233e805c7f35293e69882b09b56535d97f7b3e61b47ca475b57dc12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a\"" Jul 9 23:49:28.830718 containerd[1522]: time="2025-07-09T23:49:28.830673660Z" level=info msg="StartContainer for \"d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a\"" Jul 9 23:49:28.831579 containerd[1522]: time="2025-07-09T23:49:28.831552473Z" level=info msg="connecting to shim d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a" address="unix:///run/containerd/s/22ae6e920a51122b7331aa834f5291e59da8e37dce3ef9a09cbfab6d5ab031a2" protocol=ttrpc version=3 Jul 9 23:49:28.857876 systemd[1]: Started cri-containerd-d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a.scope - libcontainer container d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a.
Jul 9 23:49:29.049718 containerd[1522]: time="2025-07-09T23:49:29.049133176Z" level=info msg="CreateContainer within sandbox \"18e8df045e7bc1dee2fa10ad8790ce38919e1c89d6928b48f9021af82e349934\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6\"" Jul 9 23:49:29.052122 containerd[1522]: time="2025-07-09T23:49:29.050536583Z" level=info msg="StartContainer for \"e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6\"" Jul 9 23:49:29.052733 containerd[1522]: time="2025-07-09T23:49:29.052680561Z" level=info msg="StartContainer for \"d310e3f472a5eaee808792b1404f9d01470a4634704b608a2ea6be95945a761a\" returns successfully" Jul 9 23:49:29.053672 containerd[1522]: time="2025-07-09T23:49:29.053619379Z" level=info msg="connecting to shim e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6" address="unix:///run/containerd/s/4565fbef472d6ec3ae5bd61174d532d85b01faca93dec766102887be4f189140" protocol=ttrpc version=3 Jul 9 23:49:29.061255 kubelet[2638]: E0709 23:49:29.061218 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:29.082429 kubelet[2638]: I0709 23:49:29.082113 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-swfc6" podStartSLOduration=22.082092637 podStartE2EDuration="22.082092637s" podCreationTimestamp="2025-07-09 23:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:29.081533307 +0000 UTC m=+27.281737933" watchObservedRunningTime="2025-07-09 23:49:29.082092637 +0000 UTC m=+27.282297303" Jul 9 23:49:29.110159 systemd[1]: Started cri-containerd-e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6.scope - libcontainer container e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6. Jul 9 23:49:29.163213 containerd[1522]: time="2025-07-09T23:49:29.162177052Z" level=info msg="StartContainer for \"e95b1af64209b8c870286414b22eb68b10f0e6b49bba285e114f86efdab6bce6\" returns successfully" Jul 9 23:49:30.070252 kubelet[2638]: E0709 23:49:30.070121 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:30.071379 kubelet[2638]: E0709 23:49:30.071349 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:30.083107 kubelet[2638]: I0709 23:49:30.082981 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7p8cc" podStartSLOduration=23.082961553 podStartE2EDuration="23.082961553s" podCreationTimestamp="2025-07-09 23:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:30.082329212 +0000 UTC m=+28.282533838" watchObservedRunningTime="2025-07-09 23:49:30.082961553 +0000 UTC m=+28.283166179" Jul 9 23:49:31.072787 kubelet[2638]: E0709 23:49:31.072345 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:31.072787 kubelet[2638]: E0709 23:49:31.072526 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:32.077713 kubelet[2638]: E0709 23:49:32.077596 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:32.425281 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:38132.service - OpenSSH per-connection server daemon (10.0.0.1:38132). Jul 9 23:49:32.479912 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 38132 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:32.481476 sshd-session[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:32.486590 systemd-logind[1512]: New session 8 of user core. Jul 9 23:49:32.497941 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 23:49:32.635932 sshd[3977]: Connection closed by 10.0.0.1 port 38132 Jul 9 23:49:32.636284 sshd-session[3975]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:32.640146 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:38132.service: Deactivated successfully. Jul 9 23:49:32.642618 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:49:32.643980 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:49:32.648436 systemd-logind[1512]: Removed session 8. Jul 9 23:49:33.086300 kubelet[2638]: E0709 23:49:33.086251 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:37.650737 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:46328.service - OpenSSH per-connection server daemon (10.0.0.1:46328). Jul 9 23:49:37.726806 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 46328 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:37.728802 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:37.734484 systemd-logind[1512]: New session 9 of user core. Jul 9 23:49:37.743955 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 23:49:37.867488 sshd[4001]: Connection closed by 10.0.0.1 port 46328 Jul 9 23:49:37.868192 sshd-session[3997]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:37.872029 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:46328.service: Deactivated successfully. Jul 9 23:49:37.874286 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 23:49:37.876139 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:49:37.877365 systemd-logind[1512]: Removed session 9. Jul 9 23:49:42.888748 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:59816.service - OpenSSH per-connection server daemon (10.0.0.1:59816). Jul 9 23:49:42.957759 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 59816 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:42.959783 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:42.965651 systemd-logind[1512]: New session 10 of user core. Jul 9 23:49:42.978579 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 23:49:43.109941 sshd[4021]: Connection closed by 10.0.0.1 port 59816 Jul 9 23:49:43.111393 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:43.116682 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:59816.service: Deactivated successfully. Jul 9 23:49:43.120119 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 23:49:43.121897 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit. Jul 9 23:49:43.123742 systemd-logind[1512]: Removed session 10. Jul 9 23:49:48.126816 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:59828.service - OpenSSH per-connection server daemon (10.0.0.1:59828). 
Jul 9 23:49:48.191570 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 59828 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:48.193330 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:48.198795 systemd-logind[1512]: New session 11 of user core. Jul 9 23:49:48.208936 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 23:49:48.340565 sshd[4037]: Connection closed by 10.0.0.1 port 59828 Jul 9 23:49:48.341939 sshd-session[4035]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:48.359815 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:59828.service: Deactivated successfully. Jul 9 23:49:48.363536 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 23:49:48.364601 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit. Jul 9 23:49:48.371907 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:59844.service - OpenSSH per-connection server daemon (10.0.0.1:59844). Jul 9 23:49:48.372908 systemd-logind[1512]: Removed session 11. Jul 9 23:49:48.443252 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 59844 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:48.445649 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:48.450970 systemd-logind[1512]: New session 12 of user core. Jul 9 23:49:48.458948 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 23:49:48.646379 sshd[4054]: Connection closed by 10.0.0.1 port 59844 Jul 9 23:49:48.649729 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:48.668593 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:59844.service: Deactivated successfully. Jul 9 23:49:48.676067 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 23:49:48.678457 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit. 
Jul 9 23:49:48.685001 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:59860.service - OpenSSH per-connection server daemon (10.0.0.1:59860). Jul 9 23:49:48.686360 systemd-logind[1512]: Removed session 12. Jul 9 23:49:48.739835 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 59860 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:48.742072 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:48.748020 systemd-logind[1512]: New session 13 of user core. Jul 9 23:49:48.760940 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 23:49:48.886540 sshd[4067]: Connection closed by 10.0.0.1 port 59860 Jul 9 23:49:48.887943 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:48.893087 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:59860.service: Deactivated successfully. Jul 9 23:49:48.896664 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 23:49:48.897649 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit. Jul 9 23:49:48.898955 systemd-logind[1512]: Removed session 13. Jul 9 23:49:53.907259 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:50560.service - OpenSSH per-connection server daemon (10.0.0.1:50560). Jul 9 23:49:53.968248 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 50560 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:53.969669 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:53.974215 systemd-logind[1512]: New session 14 of user core. Jul 9 23:49:53.989948 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 23:49:54.114203 sshd[4083]: Connection closed by 10.0.0.1 port 50560 Jul 9 23:49:54.114956 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:54.118576 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:50560.service: Deactivated successfully. 
Jul 9 23:49:54.122226 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 23:49:54.123013 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit. Jul 9 23:49:54.124117 systemd-logind[1512]: Removed session 14. Jul 9 23:49:59.127571 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:50564.service - OpenSSH per-connection server daemon (10.0.0.1:50564). Jul 9 23:49:59.197418 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 50564 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:59.198867 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:59.205611 systemd-logind[1512]: New session 15 of user core. Jul 9 23:49:59.213864 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 23:49:59.330283 sshd[4100]: Connection closed by 10.0.0.1 port 50564 Jul 9 23:49:59.330998 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:59.345637 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:50564.service: Deactivated successfully. Jul 9 23:49:59.349063 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 23:49:59.350109 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit. Jul 9 23:49:59.352661 systemd-logind[1512]: Removed session 15. Jul 9 23:49:59.356387 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:50570.service - OpenSSH per-connection server daemon (10.0.0.1:50570). Jul 9 23:49:59.417979 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 50570 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:59.419627 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:59.424505 systemd-logind[1512]: New session 16 of user core. Jul 9 23:49:59.434895 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 9 23:49:59.736340 sshd[4115]: Connection closed by 10.0.0.1 port 50570 Jul 9 23:49:59.736914 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:59.747427 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:50570.service: Deactivated successfully. Jul 9 23:49:59.749617 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 23:49:59.750706 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit. Jul 9 23:49:59.752899 systemd-logind[1512]: Removed session 16. Jul 9 23:49:59.755068 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:50578.service - OpenSSH per-connection server daemon (10.0.0.1:50578). Jul 9 23:49:59.831974 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 50578 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:59.833307 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:59.837712 systemd-logind[1512]: New session 17 of user core. Jul 9 23:49:59.847906 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 23:50:00.717715 sshd[4129]: Connection closed by 10.0.0.1 port 50578 Jul 9 23:50:00.719978 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:00.728910 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:50578.service: Deactivated successfully. Jul 9 23:50:00.734118 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 23:50:00.736299 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit. Jul 9 23:50:00.744168 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:50580.service - OpenSSH per-connection server daemon (10.0.0.1:50580). Jul 9 23:50:00.750211 systemd-logind[1512]: Removed session 17. 
Jul 9 23:50:00.819416 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 50580 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:50:00.820867 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:00.826117 systemd-logind[1512]: New session 18 of user core. Jul 9 23:50:00.836927 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 23:50:01.092220 sshd[4153]: Connection closed by 10.0.0.1 port 50580 Jul 9 23:50:01.093648 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:01.104494 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:50580.service: Deactivated successfully. Jul 9 23:50:01.108372 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 23:50:01.109524 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit. Jul 9 23:50:01.112750 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:50586.service - OpenSSH per-connection server daemon (10.0.0.1:50586). Jul 9 23:50:01.113621 systemd-logind[1512]: Removed session 18. Jul 9 23:50:01.167120 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:50:01.169078 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:01.173839 systemd-logind[1512]: New session 19 of user core. Jul 9 23:50:01.180911 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 23:50:01.307011 sshd[4167]: Connection closed by 10.0.0.1 port 50586 Jul 9 23:50:01.307833 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:01.311975 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:50586.service: Deactivated successfully. Jul 9 23:50:01.315133 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 23:50:01.315993 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit. 
Jul 9 23:50:01.317491 systemd-logind[1512]: Removed session 19. Jul 9 23:50:06.328478 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:58022.service - OpenSSH per-connection server daemon (10.0.0.1:58022). Jul 9 23:50:06.376741 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 58022 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:50:06.377968 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:06.385771 systemd-logind[1512]: New session 20 of user core. Jul 9 23:50:06.403943 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 23:50:06.527498 sshd[4187]: Connection closed by 10.0.0.1 port 58022 Jul 9 23:50:06.528050 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:06.532320 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:58022.service: Deactivated successfully. Jul 9 23:50:06.534245 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 23:50:06.535190 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit. Jul 9 23:50:06.536956 systemd-logind[1512]: Removed session 20. Jul 9 23:50:11.543408 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:58034.service - OpenSSH per-connection server daemon (10.0.0.1:58034). Jul 9 23:50:11.591456 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 58034 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:50:11.592806 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:11.597758 systemd-logind[1512]: New session 21 of user core. Jul 9 23:50:11.607879 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 23:50:11.723857 sshd[4205]: Connection closed by 10.0.0.1 port 58034 Jul 9 23:50:11.724190 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:11.728230 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:58034.service: Deactivated successfully. 
Jul 9 23:50:11.730070 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 23:50:11.732149 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit. Jul 9 23:50:11.733216 systemd-logind[1512]: Removed session 21. Jul 9 23:50:13.909579 kubelet[2638]: E0709 23:50:13.909172 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:16.747857 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:57020.service - OpenSSH per-connection server daemon (10.0.0.1:57020). Jul 9 23:50:16.808022 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 57020 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:50:16.809400 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:16.817978 systemd-logind[1512]: New session 22 of user core. Jul 9 23:50:16.832960 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 23:50:16.962872 sshd[4221]: Connection closed by 10.0.0.1 port 57020 Jul 9 23:50:16.963571 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:16.977344 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:57020.service: Deactivated successfully. Jul 9 23:50:16.980263 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 23:50:16.981665 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit. Jul 9 23:50:16.985933 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:57022.service - OpenSSH per-connection server daemon (10.0.0.1:57022). Jul 9 23:50:16.986756 systemd-logind[1512]: Removed session 22. 
Jul 9 23:50:17.036784 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 57022 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:50:17.038241 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:17.043808 systemd-logind[1512]: New session 23 of user core. Jul 9 23:50:17.053943 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 23:50:19.172068 containerd[1522]: time="2025-07-09T23:50:19.171982680Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:50:19.176256 containerd[1522]: time="2025-07-09T23:50:19.176222655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" id:\"4bbda63c8e4c8ceae2dcedbd656a2a504b692b64563ea864cd7be8a34e548fac\" pid:4256 exited_at:{seconds:1752105019 nanos:175898278}" Jul 9 23:50:19.178056 containerd[1522]: time="2025-07-09T23:50:19.178023165Z" level=info msg="StopContainer for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" with timeout 2 (s)" Jul 9 23:50:19.178314 containerd[1522]: time="2025-07-09T23:50:19.178278667Z" level=info msg="Stop container \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" with signal terminated" Jul 9 23:50:19.181897 containerd[1522]: time="2025-07-09T23:50:19.181862849Z" level=info msg="StopContainer for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" with timeout 30 (s)" Jul 9 23:50:19.182253 containerd[1522]: time="2025-07-09T23:50:19.182225302Z" level=info msg="Stop container \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" with signal terminated" Jul 9 23:50:19.189004 systemd-networkd[1423]: lxc_health: Link DOWN Jul 9 23:50:19.189010 systemd-networkd[1423]: lxc_health: Lost carrier Jul 9 23:50:19.195957 systemd[1]: cri-containerd-d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e.scope: Deactivated successfully. Jul 9 23:50:19.198117 containerd[1522]: time="2025-07-09T23:50:19.198076560Z" level=info msg="received exit event container_id:\"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" id:\"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" pid:3206 exited_at:{seconds:1752105019 nanos:197798140}" Jul 9 23:50:19.198330 containerd[1522]: time="2025-07-09T23:50:19.198311783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" id:\"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" pid:3206 exited_at:{seconds:1752105019 nanos:197798140}" Jul 9 23:50:19.211173 systemd[1]: cri-containerd-f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf.scope: Deactivated successfully. Jul 9 23:50:19.211494 systemd[1]: cri-containerd-f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf.scope: Consumed 6.973s CPU time, 124.1M memory peak, 132K read from disk, 12.9M written to disk.
Jul 9 23:50:19.218356 containerd[1522]: time="2025-07-09T23:50:19.218315502Z" level=info msg="received exit event container_id:\"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" id:\"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" pid:3293 exited_at:{seconds:1752105019 nanos:218093038}" Jul 9 23:50:19.218870 containerd[1522]: time="2025-07-09T23:50:19.218819906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" id:\"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" pid:3293 exited_at:{seconds:1752105019 nanos:218093038}" Jul 9 23:50:19.222007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e-rootfs.mount: Deactivated successfully. Jul 9 23:50:19.238027 containerd[1522]: time="2025-07-09T23:50:19.237929569Z" level=info msg="StopContainer for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" returns successfully" Jul 9 23:50:19.243521 containerd[1522]: time="2025-07-09T23:50:19.241446075Z" level=info msg="StopPodSandbox for \"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\"" Jul 9 23:50:19.243297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf-rootfs.mount: Deactivated successfully. 
Jul 9 23:50:19.244614 containerd[1522]: time="2025-07-09T23:50:19.244545252Z" level=info msg="Container to stop \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:50:19.250522 containerd[1522]: time="2025-07-09T23:50:19.250467265Z" level=info msg="StopContainer for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" returns successfully" Jul 9 23:50:19.253775 containerd[1522]: time="2025-07-09T23:50:19.253732310Z" level=info msg="StopPodSandbox for \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\"" Jul 9 23:50:19.253901 containerd[1522]: time="2025-07-09T23:50:19.253844902Z" level=info msg="Container to stop \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:50:19.253901 containerd[1522]: time="2025-07-09T23:50:19.253862981Z" level=info msg="Container to stop \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:50:19.253901 containerd[1522]: time="2025-07-09T23:50:19.253878259Z" level=info msg="Container to stop \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:50:19.253901 containerd[1522]: time="2025-07-09T23:50:19.253888459Z" level=info msg="Container to stop \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:50:19.253901 containerd[1522]: time="2025-07-09T23:50:19.253896698Z" level=info msg="Container to stop \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:50:19.259893 systemd[1]: cri-containerd-807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9.scope: Deactivated successfully. Jul 9 23:50:19.262379 containerd[1522]: time="2025-07-09T23:50:19.262330930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" id:\"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" pid:2785 exit_status:137 exited_at:{seconds:1752105019 nanos:261989675}" Jul 9 23:50:19.279587 systemd[1]: cri-containerd-e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038.scope: Deactivated successfully. Jul 9 23:50:19.288205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9-rootfs.mount: Deactivated successfully. Jul 9 23:50:19.293138 containerd[1522]: time="2025-07-09T23:50:19.293039318Z" level=info msg="shim disconnected" id=807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9 namespace=k8s.io Jul 9 23:50:19.300720 containerd[1522]: time="2025-07-09T23:50:19.293078155Z" level=warning msg="cleaning up after shim disconnected" id=807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9 namespace=k8s.io Jul 9 23:50:19.300720 containerd[1522]: time="2025-07-09T23:50:19.300713525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:50:19.312890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038-rootfs.mount: Deactivated successfully.
Jul 9 23:50:19.316979 containerd[1522]: time="2025-07-09T23:50:19.316941675Z" level=info msg="shim disconnected" id=e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038 namespace=k8s.io Jul 9 23:50:19.317119 containerd[1522]: time="2025-07-09T23:50:19.316975833Z" level=warning msg="cleaning up after shim disconnected" id=e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038 namespace=k8s.io Jul 9 23:50:19.317119 containerd[1522]: time="2025-07-09T23:50:19.317005671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:50:19.319247 containerd[1522]: time="2025-07-09T23:50:19.319204592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" id:\"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" pid:2883 exit_status:137 exited_at:{seconds:1752105019 nanos:288244783}" Jul 9 23:50:19.319496 containerd[1522]: time="2025-07-09T23:50:19.319466573Z" level=info msg="TearDown network for sandbox \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" successfully" Jul 9 23:50:19.319545 containerd[1522]: time="2025-07-09T23:50:19.319498731Z" level=info msg="StopPodSandbox for \"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" returns successfully" Jul 9 23:50:19.321289 containerd[1522]: time="2025-07-09T23:50:19.321216727Z" level=info msg="TearDown network for sandbox \"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" successfully" Jul 9 23:50:19.321289 containerd[1522]: time="2025-07-09T23:50:19.321248085Z" level=info msg="StopPodSandbox for \"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" returns successfully" Jul 9 23:50:19.321303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038-shm.mount: Deactivated successfully. 
Jul 9 23:50:19.321414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9-shm.mount: Deactivated successfully.
Jul 9 23:50:19.321477 containerd[1522]: time="2025-07-09T23:50:19.321451710Z" level=info msg="received exit event sandbox_id:\"807a5ac2a9004e19b8c4379c042cf6d433cf5dfb13b19f61b5ba9882512e89c9\" exit_status:137 exited_at:{seconds:1752105019 nanos:261989675}"
Jul 9 23:50:19.321907 containerd[1522]: time="2025-07-09T23:50:19.321852442Z" level=info msg="received exit event sandbox_id:\"e90287c1a44e24bfeb26f3389e6f65fd6ede9a6a2f322373a167f17880c6a038\" exit_status:137 exited_at:{seconds:1752105019 nanos:288244783}"
Jul 9 23:50:19.438242 kubelet[2638]: I0709 23:50:19.438095 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-xtables-lock\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438242 kubelet[2638]: I0709 23:50:19.438155 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz94f\" (UniqueName: \"kubernetes.io/projected/0ff2f7f6-1790-4e03-b1f0-65e619634106-kube-api-access-kz94f\") pod \"0ff2f7f6-1790-4e03-b1f0-65e619634106\" (UID: \"0ff2f7f6-1790-4e03-b1f0-65e619634106\") "
Jul 9 23:50:19.438242 kubelet[2638]: I0709 23:50:19.438178 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hubble-tls\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438242 kubelet[2638]: I0709 23:50:19.438204 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-cgroup\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438242 kubelet[2638]: I0709 23:50:19.438220 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-etc-cni-netd\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438242 kubelet[2638]: I0709 23:50:19.438238 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jps98\" (UniqueName: \"kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-kube-api-access-jps98\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438825 kubelet[2638]: I0709 23:50:19.438256 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09d9652e-00d8-4cb5-96b7-df6aabc1e902-clustermesh-secrets\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438825 kubelet[2638]: I0709 23:50:19.438282 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-config-path\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438825 kubelet[2638]: I0709 23:50:19.438297 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-lib-modules\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438825 kubelet[2638]: I0709 23:50:19.438311 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-net\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438825 kubelet[2638]: I0709 23:50:19.438327 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ff2f7f6-1790-4e03-b1f0-65e619634106-cilium-config-path\") pod \"0ff2f7f6-1790-4e03-b1f0-65e619634106\" (UID: \"0ff2f7f6-1790-4e03-b1f0-65e619634106\") "
Jul 9 23:50:19.438825 kubelet[2638]: I0709 23:50:19.438350 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-bpf-maps\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438962 kubelet[2638]: I0709 23:50:19.438366 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-run\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438962 kubelet[2638]: I0709 23:50:19.438383 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-kernel\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438962 kubelet[2638]: I0709 23:50:19.438402 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cni-path\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.438962 kubelet[2638]: I0709 23:50:19.438423 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hostproc\") pod \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\" (UID: \"09d9652e-00d8-4cb5-96b7-df6aabc1e902\") "
Jul 9 23:50:19.448015 kubelet[2638]: I0709 23:50:19.447970 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hostproc" (OuterVolumeSpecName: "hostproc") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.448015 kubelet[2638]: I0709 23:50:19.447968 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.448177 kubelet[2638]: I0709 23:50:19.448039 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.448404 kubelet[2638]: I0709 23:50:19.448361 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.449464 kubelet[2638]: I0709 23:50:19.449431 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.449516 kubelet[2638]: I0709 23:50:19.449468 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.449516 kubelet[2638]: I0709 23:50:19.449502 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.449567 kubelet[2638]: I0709 23:50:19.449521 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cni-path" (OuterVolumeSpecName: "cni-path") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.449567 kubelet[2638]: I0709 23:50:19.449537 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.449567 kubelet[2638]: I0709 23:50:19.449552 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:50:19.450243 kubelet[2638]: I0709 23:50:19.450182 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff2f7f6-1790-4e03-b1f0-65e619634106-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0ff2f7f6-1790-4e03-b1f0-65e619634106" (UID: "0ff2f7f6-1790-4e03-b1f0-65e619634106"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 9 23:50:19.452179 kubelet[2638]: I0709 23:50:19.452037 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 23:50:19.452179 kubelet[2638]: I0709 23:50:19.452156 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 9 23:50:19.452559 kubelet[2638]: I0709 23:50:19.452528 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-kube-api-access-jps98" (OuterVolumeSpecName: "kube-api-access-jps98") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "kube-api-access-jps98". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 23:50:19.452730 kubelet[2638]: I0709 23:50:19.452704 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff2f7f6-1790-4e03-b1f0-65e619634106-kube-api-access-kz94f" (OuterVolumeSpecName: "kube-api-access-kz94f") pod "0ff2f7f6-1790-4e03-b1f0-65e619634106" (UID: "0ff2f7f6-1790-4e03-b1f0-65e619634106"). InnerVolumeSpecName "kube-api-access-kz94f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 23:50:19.453168 kubelet[2638]: I0709 23:50:19.453137 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d9652e-00d8-4cb5-96b7-df6aabc1e902-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "09d9652e-00d8-4cb5-96b7-df6aabc1e902" (UID: "09d9652e-00d8-4cb5-96b7-df6aabc1e902"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 9 23:50:19.539543 kubelet[2638]: I0709 23:50:19.539494 2638 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539543 kubelet[2638]: I0709 23:50:19.539530 2638 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539543 kubelet[2638]: I0709 23:50:19.539540 2638 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539543 kubelet[2638]: I0709 23:50:19.539550 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ff2f7f6-1790-4e03-b1f0-65e619634106-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539564 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539572 2638 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539580 2638 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539588 2638 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539595 2638 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539603 2638 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kz94f\" (UniqueName: \"kubernetes.io/projected/0ff2f7f6-1790-4e03-b1f0-65e619634106-kube-api-access-kz94f\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539612 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539756 kubelet[2638]: I0709 23:50:19.539625 2638 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09d9652e-00d8-4cb5-96b7-df6aabc1e902-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539934 kubelet[2638]: I0709 23:50:19.539634 2638 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539934 kubelet[2638]: I0709 23:50:19.539642 2638 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09d9652e-00d8-4cb5-96b7-df6aabc1e902-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539934 kubelet[2638]: I0709 23:50:19.539649 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d9652e-00d8-4cb5-96b7-df6aabc1e902-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.539934 kubelet[2638]: I0709 23:50:19.539658 2638 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jps98\" (UniqueName: \"kubernetes.io/projected/09d9652e-00d8-4cb5-96b7-df6aabc1e902-kube-api-access-jps98\") on node \"localhost\" DevicePath \"\""
Jul 9 23:50:19.921411 systemd[1]: Removed slice kubepods-burstable-pod09d9652e_00d8_4cb5_96b7_df6aabc1e902.slice - libcontainer container kubepods-burstable-pod09d9652e_00d8_4cb5_96b7_df6aabc1e902.slice.
Jul 9 23:50:19.921527 systemd[1]: kubepods-burstable-pod09d9652e_00d8_4cb5_96b7_df6aabc1e902.slice: Consumed 7.162s CPU time, 124.4M memory peak, 136K read from disk, 12.9M written to disk.
Jul 9 23:50:19.922601 systemd[1]: Removed slice kubepods-besteffort-pod0ff2f7f6_1790_4e03_b1f0_65e619634106.slice - libcontainer container kubepods-besteffort-pod0ff2f7f6_1790_4e03_b1f0_65e619634106.slice.
Jul 9 23:50:20.187020 kubelet[2638]: I0709 23:50:20.186910 2638 scope.go:117] "RemoveContainer" containerID="f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf"
Jul 9 23:50:20.192171 containerd[1522]: time="2025-07-09T23:50:20.191944740Z" level=info msg="RemoveContainer for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\""
Jul 9 23:50:20.216902 containerd[1522]: time="2025-07-09T23:50:20.216848969Z" level=info msg="RemoveContainer for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" returns successfully"
Jul 9 23:50:20.217253 kubelet[2638]: I0709 23:50:20.217219 2638 scope.go:117] "RemoveContainer" containerID="b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004"
Jul 9 23:50:20.219657 containerd[1522]: time="2025-07-09T23:50:20.219353439Z" level=info msg="RemoveContainer for \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\""
Jul 9 23:50:20.222289 systemd[1]: var-lib-kubelet-pods-0ff2f7f6\x2d1790\x2d4e03\x2db1f0\x2d65e619634106-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkz94f.mount: Deactivated successfully.
Jul 9 23:50:20.222409 systemd[1]: var-lib-kubelet-pods-09d9652e\x2d00d8\x2d4cb5\x2d96b7\x2ddf6aabc1e902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djps98.mount: Deactivated successfully.
Jul 9 23:50:20.222479 systemd[1]: var-lib-kubelet-pods-09d9652e\x2d00d8\x2d4cb5\x2d96b7\x2ddf6aabc1e902-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 9 23:50:20.222531 systemd[1]: var-lib-kubelet-pods-09d9652e\x2d00d8\x2d4cb5\x2d96b7\x2ddf6aabc1e902-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 9 23:50:20.234665 containerd[1522]: time="2025-07-09T23:50:20.234611324Z" level=info msg="RemoveContainer for \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" returns successfully"
Jul 9 23:50:20.235043 kubelet[2638]: I0709 23:50:20.234891 2638 scope.go:117] "RemoveContainer" containerID="afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091"
Jul 9 23:50:20.237258 containerd[1522]: time="2025-07-09T23:50:20.237227026Z" level=info msg="RemoveContainer for \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\""
Jul 9 23:50:20.241057 containerd[1522]: time="2025-07-09T23:50:20.241013489Z" level=info msg="RemoveContainer for \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" returns successfully"
Jul 9 23:50:20.241387 kubelet[2638]: I0709 23:50:20.241255 2638 scope.go:117] "RemoveContainer" containerID="da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703"
Jul 9 23:50:20.243030 containerd[1522]: time="2025-07-09T23:50:20.242999714Z" level=info msg="RemoveContainer for \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\""
Jul 9 23:50:20.250340 containerd[1522]: time="2025-07-09T23:50:20.250294499Z" level=info msg="RemoveContainer for \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" returns successfully"
Jul 9 23:50:20.250766 kubelet[2638]: I0709 23:50:20.250538 2638 scope.go:117] "RemoveContainer" containerID="814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc"
Jul 9 23:50:20.252358 containerd[1522]: time="2025-07-09T23:50:20.252322161Z" level=info msg="RemoveContainer for \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\""
Jul 9 23:50:20.255214 containerd[1522]: time="2025-07-09T23:50:20.255181327Z" level=info msg="RemoveContainer for \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" returns successfully"
Jul 9 23:50:20.255486 kubelet[2638]: I0709 23:50:20.255425 2638 scope.go:117] "RemoveContainer" containerID="f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf"
Jul 9 23:50:20.255991 containerd[1522]: time="2025-07-09T23:50:20.255830563Z" level=error msg="ContainerStatus for \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\": not found"
Jul 9 23:50:20.259856 kubelet[2638]: E0709 23:50:20.259822 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\": not found" containerID="f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf"
Jul 9 23:50:20.264458 kubelet[2638]: I0709 23:50:20.264222 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf"} err="failed to get container status \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2bab746185deca16935c763576e10ef89103d46fbbefb68e7219319afad5ecf\": not found"
Jul 9 23:50:20.264458 kubelet[2638]: I0709 23:50:20.264374 2638 scope.go:117] "RemoveContainer" containerID="b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004"
Jul 9 23:50:20.264947 containerd[1522]: time="2025-07-09T23:50:20.264900587Z" level=error msg="ContainerStatus for \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\": not found"
Jul 9 23:50:20.265155 kubelet[2638]: E0709 23:50:20.265132 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\": not found" containerID="b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004"
Jul 9 23:50:20.265244 kubelet[2638]: I0709 23:50:20.265218 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004"} err="failed to get container status \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6d598d9d7f831fbe6efd9db7eecc4fb4be992b62ef3474009274efe1b79c004\": not found"
Jul 9 23:50:20.265336 kubelet[2638]: I0709 23:50:20.265289 2638 scope.go:117] "RemoveContainer" containerID="afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091"
Jul 9 23:50:20.265592 containerd[1522]: time="2025-07-09T23:50:20.265558503Z" level=error msg="ContainerStatus for \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\": not found"
Jul 9 23:50:20.265745 kubelet[2638]: E0709 23:50:20.265721 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\": not found" containerID="afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091"
Jul 9 23:50:20.265789 kubelet[2638]: I0709 23:50:20.265752 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091"} err="failed to get container status \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\": rpc error: code = NotFound desc = an error occurred when try to find container \"afdc8ecc031cd6205bed68c1f7787c58891d56e850687ab081296f7ca4088091\": not found"
Jul 9 23:50:20.265789 kubelet[2638]: I0709 23:50:20.265770 2638 scope.go:117] "RemoveContainer" containerID="da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703"
Jul 9 23:50:20.265999 containerd[1522]: time="2025-07-09T23:50:20.265971394Z" level=error msg="ContainerStatus for \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\": not found"
Jul 9 23:50:20.266165 kubelet[2638]: E0709 23:50:20.266139 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\": not found" containerID="da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703"
Jul 9 23:50:20.266313 kubelet[2638]: I0709 23:50:20.266233 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703"} err="failed to get container status \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\": rpc error: code = NotFound desc = an error occurred when try to find container \"da51315d832313529f1c3f9cb6375300d04d06277d3f3a74d561a8173a339703\": not found"
Jul 9 23:50:20.266313 kubelet[2638]: I0709 23:50:20.266252 2638 scope.go:117] "RemoveContainer" containerID="814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc"
Jul 9 23:50:20.266522 containerd[1522]: time="2025-07-09T23:50:20.266491359Z" level=error msg="ContainerStatus for \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\": not found"
Jul 9 23:50:20.266769 kubelet[2638]: E0709 23:50:20.266748 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\": not found" containerID="814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc"
Jul 9 23:50:20.266837 kubelet[2638]: I0709 23:50:20.266772 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc"} err="failed to get container status \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\": rpc error: code = NotFound desc = an error occurred when try to find container \"814cf0df638e13d8c570079f5bd98b350811442a1698337ba8f825ea3a6afebc\": not found"
Jul 9 23:50:20.266837 kubelet[2638]: I0709 23:50:20.266788 2638 scope.go:117] "RemoveContainer" containerID="d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e"
Jul 9 23:50:20.268283 containerd[1522]: time="2025-07-09T23:50:20.268247600Z" level=info msg="RemoveContainer for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\""
Jul 9 23:50:20.270984 containerd[1522]: time="2025-07-09T23:50:20.270954256Z" level=info msg="RemoveContainer for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" returns successfully"
Jul 9 23:50:20.271182 kubelet[2638]: I0709 23:50:20.271162 2638 scope.go:117] "RemoveContainer" containerID="d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e"
Jul 9 23:50:20.271665 containerd[1522]: time="2025-07-09T23:50:20.271553975Z" level=error msg="ContainerStatus for \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\": not found"
Jul 9 23:50:20.271773 kubelet[2638]: E0709 23:50:20.271740 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\": not found" containerID="d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e"
Jul 9 23:50:20.271809 kubelet[2638]: I0709 23:50:20.271769 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e"} err="failed to get container status \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d02785da9762320e5652ecc702b64470bfec946bc6b3c98fe61dc05cafc9898e\": not found"
Jul 9 23:50:21.087571 sshd[4236]: Connection closed by 10.0.0.1 port 57022
Jul 9 23:50:21.088137 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:21.104014 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:57022.service: Deactivated successfully.
Jul 9 23:50:21.107006 systemd[1]: session-23.scope: Deactivated successfully.
Jul 9 23:50:21.107916 systemd[1]: session-23.scope: Consumed 1.378s CPU time, 24M memory peak.
Jul 9 23:50:21.111749 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit.
Jul 9 23:50:21.113318 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:57036.service - OpenSSH per-connection server daemon (10.0.0.1:57036).
Jul 9 23:50:21.114766 systemd-logind[1512]: Removed session 23.
Jul 9 23:50:21.179087 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 57036 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:21.180573 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:21.185778 systemd-logind[1512]: New session 24 of user core.
Jul 9 23:50:21.191869 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 23:50:21.914477 kubelet[2638]: I0709 23:50:21.914438 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d9652e-00d8-4cb5-96b7-df6aabc1e902" path="/var/lib/kubelet/pods/09d9652e-00d8-4cb5-96b7-df6aabc1e902/volumes"
Jul 9 23:50:21.915215 kubelet[2638]: I0709 23:50:21.915195 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff2f7f6-1790-4e03-b1f0-65e619634106" path="/var/lib/kubelet/pods/0ff2f7f6-1790-4e03-b1f0-65e619634106/volumes"
Jul 9 23:50:22.020647 kubelet[2638]: E0709 23:50:22.020011 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 23:50:22.346035 sshd[4388]: Connection closed by 10.0.0.1 port 57036
Jul 9 23:50:22.347066 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:22.359623 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:57036.service: Deactivated successfully.
Jul 9 23:50:22.368395 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 23:50:22.368651 systemd[1]: session-24.scope: Consumed 1.049s CPU time, 24.1M memory peak.
Jul 9 23:50:22.373497 systemd-logind[1512]: Session 24 logged out. Waiting for processes to exit.
Jul 9 23:50:22.381121 systemd[1]: Started sshd@24-10.0.0.69:22-10.0.0.1:57048.service - OpenSSH per-connection server daemon (10.0.0.1:57048).
Jul 9 23:50:22.382545 kubelet[2638]: I0709 23:50:22.382512 2638 memory_manager.go:355] "RemoveStaleState removing state" podUID="09d9652e-00d8-4cb5-96b7-df6aabc1e902" containerName="cilium-agent"
Jul 9 23:50:22.382545 kubelet[2638]: I0709 23:50:22.382553 2638 memory_manager.go:355] "RemoveStaleState removing state" podUID="0ff2f7f6-1790-4e03-b1f0-65e619634106" containerName="cilium-operator"
Jul 9 23:50:22.383208 systemd-logind[1512]: Removed session 24.
Jul 9 23:50:22.399957 systemd[1]: Created slice kubepods-burstable-pod6f7cfbaf_7969_4ed4_b079_f0f4e5f1cace.slice - libcontainer container kubepods-burstable-pod6f7cfbaf_7969_4ed4_b079_f0f4e5f1cace.slice.
Jul 9 23:50:22.441417 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:22.442774 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:22.446718 systemd-logind[1512]: New session 25 of user core.
Jul 9 23:50:22.456540 kubelet[2638]: I0709 23:50:22.456501 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-cilium-config-path\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456540 kubelet[2638]: I0709 23:50:22.456542 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-cilium-run\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456677 kubelet[2638]: I0709 23:50:22.456572 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-etc-cni-netd\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456677 kubelet[2638]: I0709 23:50:22.456591 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-cni-path\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456677 kubelet[2638]: I0709 23:50:22.456608 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-hostproc\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456677 kubelet[2638]: I0709 23:50:22.456623 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-cilium-ipsec-secrets\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456677 kubelet[2638]: I0709 23:50:22.456637 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-host-proc-sys-kernel\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456677 kubelet[2638]: I0709 23:50:22.456651 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcqt7\" (UniqueName: \"kubernetes.io/projected/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-kube-api-access-fcqt7\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456822 kubelet[2638]: I0709 23:50:22.456666 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-clustermesh-secrets\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456822 kubelet[2638]: I0709 23:50:22.456681 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-cilium-cgroup\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456822 kubelet[2638]: I0709 23:50:22.456709 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-xtables-lock\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456822 kubelet[2638]: I0709 23:50:22.456724 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-hubble-tls\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456822 kubelet[2638]: I0709 23:50:22.456741 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-bpf-maps\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456822 kubelet[2638]: I0709 23:50:22.456759 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-lib-modules\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.456937 kubelet[2638]: I0709 23:50:22.456775 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace-host-proc-sys-net\") pod \"cilium-cr75v\" (UID: \"6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace\") " pod="kube-system/cilium-cr75v"
Jul 9 23:50:22.461897 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 23:50:22.510763 sshd[4402]: Connection closed by 10.0.0.1 port 57048
Jul 9 23:50:22.511048 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:22.528033 systemd[1]: sshd@24-10.0.0.69:22-10.0.0.1:57048.service: Deactivated successfully.
Jul 9 23:50:22.531020 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 23:50:22.531647 systemd-logind[1512]: Session 25 logged out. Waiting for processes to exit.
Jul 9 23:50:22.534932 systemd[1]: Started sshd@25-10.0.0.69:22-10.0.0.1:60324.service - OpenSSH per-connection server daemon (10.0.0.1:60324).
Jul 9 23:50:22.536200 systemd-logind[1512]: Removed session 25.
Jul 9 23:50:22.588744 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 60324 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:22.589466 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:22.594078 systemd-logind[1512]: New session 26 of user core.
Jul 9 23:50:22.603913 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 23:50:22.702818 kubelet[2638]: E0709 23:50:22.702758 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:22.704014 containerd[1522]: time="2025-07-09T23:50:22.703974726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr75v,Uid:6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace,Namespace:kube-system,Attempt:0,}"
Jul 9 23:50:22.778452 containerd[1522]: time="2025-07-09T23:50:22.778285911Z" level=info msg="connecting to shim 73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9" address="unix:///run/containerd/s/0cc2c48e197fc7efc0e8e345255f81c725d28f339578061a7d0b0ce90a63a3db" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:50:22.801886 systemd[1]: Started cri-containerd-73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9.scope - libcontainer container 73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9.
Jul 9 23:50:22.826474 containerd[1522]: time="2025-07-09T23:50:22.826416026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr75v,Uid:6f7cfbaf-7969-4ed4-b079-f0f4e5f1cace,Namespace:kube-system,Attempt:0,} returns sandbox id \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\""
Jul 9 23:50:22.827263 kubelet[2638]: E0709 23:50:22.827230 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:22.830670 containerd[1522]: time="2025-07-09T23:50:22.830621374Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:50:22.838298 containerd[1522]: time="2025-07-09T23:50:22.838255996Z" level=info msg="Container d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:22.847637 containerd[1522]: time="2025-07-09T23:50:22.847591677Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\""
Jul 9 23:50:22.848434 containerd[1522]: time="2025-07-09T23:50:22.848356791Z" level=info msg="StartContainer for \"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\""
Jul 9 23:50:22.849616 containerd[1522]: time="2025-07-09T23:50:22.849294374Z" level=info msg="connecting to shim d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8" address="unix:///run/containerd/s/0cc2c48e197fc7efc0e8e345255f81c725d28f339578061a7d0b0ce90a63a3db" protocol=ttrpc version=3
Jul 9 23:50:22.871868 systemd[1]: Started cri-containerd-d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8.scope - libcontainer container d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8.
Jul 9 23:50:22.901918 containerd[1522]: time="2025-07-09T23:50:22.901882422Z" level=info msg="StartContainer for \"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\" returns successfully"
Jul 9 23:50:22.911863 kubelet[2638]: E0709 23:50:22.911827 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:22.913200 systemd[1]: cri-containerd-d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8.scope: Deactivated successfully.
Jul 9 23:50:22.914993 containerd[1522]: time="2025-07-09T23:50:22.914960198Z" level=info msg="received exit event container_id:\"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\" id:\"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\" pid:4479 exited_at:{seconds:1752105022 nanos:914567901}"
Jul 9 23:50:22.915591 containerd[1522]: time="2025-07-09T23:50:22.915497606Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\" id:\"d0ad5661d3da191972088b51683a6ccb468f3ee8ffa809ba29c81a582bf4edb8\" pid:4479 exited_at:{seconds:1752105022 nanos:914567901}"
Jul 9 23:50:23.204057 kubelet[2638]: E0709 23:50:23.203721 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:23.206200 containerd[1522]: time="2025-07-09T23:50:23.206137154Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:50:23.214517 containerd[1522]: time="2025-07-09T23:50:23.214468646Z" level=info msg="Container 563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:23.227942 containerd[1522]: time="2025-07-09T23:50:23.227896212Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\""
Jul 9 23:50:23.228435 containerd[1522]: time="2025-07-09T23:50:23.228408303Z" level=info msg="StartContainer for \"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\""
Jul 9 23:50:23.229259 containerd[1522]: time="2025-07-09T23:50:23.229225657Z" level=info msg="connecting to shim 563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5" address="unix:///run/containerd/s/0cc2c48e197fc7efc0e8e345255f81c725d28f339578061a7d0b0ce90a63a3db" protocol=ttrpc version=3
Jul 9 23:50:23.255062 systemd[1]: Started cri-containerd-563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5.scope - libcontainer container 563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5.
Jul 9 23:50:23.285813 containerd[1522]: time="2025-07-09T23:50:23.285779841Z" level=info msg="StartContainer for \"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\" returns successfully"
Jul 9 23:50:23.295827 systemd[1]: cri-containerd-563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5.scope: Deactivated successfully.
Jul 9 23:50:23.296887 containerd[1522]: time="2025-07-09T23:50:23.296835940Z" level=info msg="received exit event container_id:\"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\" id:\"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\" pid:4524 exited_at:{seconds:1752105023 nanos:296441482}"
Jul 9 23:50:23.297243 containerd[1522]: time="2025-07-09T23:50:23.297222358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\" id:\"563f0514aa0b95d44ac501c3cb46f976169aa7a9b6c742d0697257e93dfeefa5\" pid:4524 exited_at:{seconds:1752105023 nanos:296441482}"
Jul 9 23:50:23.446724 kubelet[2638]: I0709 23:50:23.445670 2638 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T23:50:23Z","lastTransitionTime":"2025-07-09T23:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 23:50:24.207233 kubelet[2638]: E0709 23:50:24.207096 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:24.210956 containerd[1522]: time="2025-07-09T23:50:24.210901972Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:50:24.246352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577822201.mount: Deactivated successfully.
Jul 9 23:50:24.246895 containerd[1522]: time="2025-07-09T23:50:24.246801047Z" level=info msg="Container f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:24.256584 containerd[1522]: time="2025-07-09T23:50:24.256536776Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\""
Jul 9 23:50:24.258813 containerd[1522]: time="2025-07-09T23:50:24.258767659Z" level=info msg="StartContainer for \"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\""
Jul 9 23:50:24.260735 containerd[1522]: time="2025-07-09T23:50:24.260463290Z" level=info msg="connecting to shim f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9" address="unix:///run/containerd/s/0cc2c48e197fc7efc0e8e345255f81c725d28f339578061a7d0b0ce90a63a3db" protocol=ttrpc version=3
Jul 9 23:50:24.290915 systemd[1]: Started cri-containerd-f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9.scope - libcontainer container f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9.
Jul 9 23:50:24.325211 systemd[1]: cri-containerd-f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9.scope: Deactivated successfully.
Jul 9 23:50:24.328713 containerd[1522]: time="2025-07-09T23:50:24.327173588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\" id:\"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\" pid:4571 exited_at:{seconds:1752105024 nanos:326893882}"
Jul 9 23:50:24.328713 containerd[1522]: time="2025-07-09T23:50:24.327256103Z" level=info msg="received exit event container_id:\"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\" id:\"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\" pid:4571 exited_at:{seconds:1752105024 nanos:326893882}"
Jul 9 23:50:24.328713 containerd[1522]: time="2025-07-09T23:50:24.327809514Z" level=info msg="StartContainer for \"f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9\" returns successfully"
Jul 9 23:50:24.561638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f56408e320d52dfe0b7d8a44f61a768d88de4393e5f4cbeba3474762033bfbf9-rootfs.mount: Deactivated successfully.
Jul 9 23:50:25.211897 kubelet[2638]: E0709 23:50:25.211856 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:25.214892 containerd[1522]: time="2025-07-09T23:50:25.213867874Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:50:25.225306 containerd[1522]: time="2025-07-09T23:50:25.225264557Z" level=info msg="Container e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:25.237586 containerd[1522]: time="2025-07-09T23:50:25.237538076Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\""
Jul 9 23:50:25.238279 containerd[1522]: time="2025-07-09T23:50:25.238246721Z" level=info msg="StartContainer for \"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\""
Jul 9 23:50:25.239258 containerd[1522]: time="2025-07-09T23:50:25.239231153Z" level=info msg="connecting to shim e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6" address="unix:///run/containerd/s/0cc2c48e197fc7efc0e8e345255f81c725d28f339578061a7d0b0ce90a63a3db" protocol=ttrpc version=3
Jul 9 23:50:25.280923 systemd[1]: Started cri-containerd-e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6.scope - libcontainer container e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6.
Jul 9 23:50:25.305827 systemd[1]: cri-containerd-e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6.scope: Deactivated successfully.
Jul 9 23:50:25.322819 containerd[1522]: time="2025-07-09T23:50:25.322768224Z" level=info msg="received exit event container_id:\"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\" id:\"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\" pid:4609 exited_at:{seconds:1752105025 nanos:322499997}"
Jul 9 23:50:25.322962 containerd[1522]: time="2025-07-09T23:50:25.322934296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\" id:\"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\" pid:4609 exited_at:{seconds:1752105025 nanos:322499997}"
Jul 9 23:50:25.330348 containerd[1522]: time="2025-07-09T23:50:25.330314855Z" level=info msg="StartContainer for \"e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6\" returns successfully"
Jul 9 23:50:25.346100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e98a6589d7e831d020bee9aedbc161f81bcd7755be4d1e1d2654b97c81aab9f6-rootfs.mount: Deactivated successfully.
Jul 9 23:50:26.217828 kubelet[2638]: E0709 23:50:26.217668 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:26.221717 containerd[1522]: time="2025-07-09T23:50:26.221670827Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:50:26.235709 containerd[1522]: time="2025-07-09T23:50:26.235232370Z" level=info msg="Container 73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:26.248785 containerd[1522]: time="2025-07-09T23:50:26.248746675Z" level=info msg="CreateContainer within sandbox \"73657d091ea1d68bbae7c080ccc639d350bb0203b8e289d528810525bc38d2b9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\""
Jul 9 23:50:26.249534 containerd[1522]: time="2025-07-09T23:50:26.249495921Z" level=info msg="StartContainer for \"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\""
Jul 9 23:50:26.250758 containerd[1522]: time="2025-07-09T23:50:26.250722025Z" level=info msg="connecting to shim 73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470" address="unix:///run/containerd/s/0cc2c48e197fc7efc0e8e345255f81c725d28f339578061a7d0b0ce90a63a3db" protocol=ttrpc version=3
Jul 9 23:50:26.274881 systemd[1]: Started cri-containerd-73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470.scope - libcontainer container 73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470.
Jul 9 23:50:26.306948 containerd[1522]: time="2025-07-09T23:50:26.306912749Z" level=info msg="StartContainer for \"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\" returns successfully"
Jul 9 23:50:26.365498 containerd[1522]: time="2025-07-09T23:50:26.365427366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\" id:\"0fe54f8bb99f75715bb92246e71291eebefd2feb4ab6d9b8762eaa759969310a\" pid:4676 exited_at:{seconds:1752105026 nanos:365109500}"
Jul 9 23:50:26.622715 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 9 23:50:27.223705 kubelet[2638]: E0709 23:50:27.223646 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:27.244158 kubelet[2638]: I0709 23:50:27.244048 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cr75v" podStartSLOduration=5.244031635 podStartE2EDuration="5.244031635s" podCreationTimestamp="2025-07-09 23:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:50:27.241799569 +0000 UTC m=+85.442004195" watchObservedRunningTime="2025-07-09 23:50:27.244031635 +0000 UTC m=+85.444236261"
Jul 9 23:50:27.909629 kubelet[2638]: E0709 23:50:27.909576 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:28.704116 kubelet[2638]: E0709 23:50:28.704066 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:28.981844 containerd[1522]: time="2025-07-09T23:50:28.981667409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\" id:\"8fd463f17b50b1f3ca3b7b73483a9875dfd55961f373f78db4143cfe80192d9d\" pid:5024 exit_status:1 exited_at:{seconds:1752105028 nanos:981084431}"
Jul 9 23:50:29.589073 systemd-networkd[1423]: lxc_health: Link UP
Jul 9 23:50:29.598293 systemd-networkd[1423]: lxc_health: Gained carrier
Jul 9 23:50:30.704546 kubelet[2638]: E0709 23:50:30.704430 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:30.909825 systemd-networkd[1423]: lxc_health: Gained IPv6LL
Jul 9 23:50:31.110480 containerd[1522]: time="2025-07-09T23:50:31.110326206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\" id:\"2fd98a9e8d507d0b98cb2e52295f2d776c639d94c84f1c106f5db0d4d99b1e8b\" pid:5211 exited_at:{seconds:1752105031 nanos:109938137}"
Jul 9 23:50:31.233009 kubelet[2638]: E0709 23:50:31.232981 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:32.234533 kubelet[2638]: E0709 23:50:32.234503 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:33.271417 containerd[1522]: time="2025-07-09T23:50:33.271363313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\" id:\"1eb49260d96637176edd55c024a9b2d6961100518d0e34cd9a5d8f6d6e7dd12e\" pid:5242 exited_at:{seconds:1752105033 nanos:270904364}"
Jul 9 23:50:35.392112 containerd[1522]: time="2025-07-09T23:50:35.392067665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73518c97b7489eb955318e74217a6fae65af773b98f1d99c65bff87bf2a5d470\" id:\"6324006e31d09d1518c6c8f52e7d6c715e00fb7b9e65db9d8acb7a22ff14e240\" pid:5271 exited_at:{seconds:1752105035 nanos:390809129}"
Jul 9 23:50:35.397050 sshd[4415]: Connection closed by 10.0.0.1 port 60324
Jul 9 23:50:35.397751 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:35.401567 systemd[1]: sshd@25-10.0.0.69:22-10.0.0.1:60324.service: Deactivated successfully.
Jul 9 23:50:35.403390 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 23:50:35.406433 systemd-logind[1512]: Session 26 logged out. Waiting for processes to exit.
Jul 9 23:50:35.407562 systemd-logind[1512]: Removed session 26.
Jul 9 23:50:35.910192 kubelet[2638]: E0709 23:50:35.910156 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"