Nov 4 12:34:21.358363 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 4 12:34:21.358386 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Nov 4 10:59:33 -00 2025 Nov 4 12:34:21.358395 kernel: KASLR enabled Nov 4 12:34:21.358401 kernel: efi: EFI v2.7 by EDK II Nov 4 12:34:21.358406 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 4 12:34:21.358412 kernel: random: crng init done Nov 4 12:34:21.358419 kernel: secureboot: Secure boot disabled Nov 4 12:34:21.358425 kernel: ACPI: Early table checksum verification disabled Nov 4 12:34:21.358433 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 4 12:34:21.358439 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 4 12:34:21.358445 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358451 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358457 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358463 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358471 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358478 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358484 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358490 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358497 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:34:21.358503 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 4 12:34:21.358509 kernel: ACPI: Use ACPI SPCR as default console: No Nov 4 12:34:21.358516 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 4 12:34:21.358523 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Nov 4 12:34:21.358529 kernel: Zone ranges: Nov 4 12:34:21.358536 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 4 12:34:21.358551 kernel: DMA32 empty Nov 4 12:34:21.358557 kernel: Normal empty Nov 4 12:34:21.358563 kernel: Device empty Nov 4 12:34:21.358570 kernel: Movable zone start for each node Nov 4 12:34:21.358576 kernel: Early memory node ranges Nov 4 12:34:21.358582 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 4 12:34:21.358589 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 4 12:34:21.358595 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 4 12:34:21.358601 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 4 12:34:21.358621 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 4 12:34:21.358627 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 4 12:34:21.358633 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 4 12:34:21.358640 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 4 12:34:21.358646 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 4 12:34:21.358653 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 4 12:34:21.358663 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 4 12:34:21.358669 kernel: node 0: [mem 
0x00000000dcec0000-0x00000000dcfdffff] Nov 4 12:34:21.358676 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 4 12:34:21.358683 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 4 12:34:21.358690 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 4 12:34:21.358697 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 4 12:34:21.358703 kernel: psci: probing for conduit method from ACPI. Nov 4 12:34:21.358710 kernel: psci: PSCIv1.1 detected in firmware. Nov 4 12:34:21.358718 kernel: psci: Using standard PSCI v0.2 function IDs Nov 4 12:34:21.358725 kernel: psci: Trusted OS migration not required Nov 4 12:34:21.358732 kernel: psci: SMC Calling Convention v1.1 Nov 4 12:34:21.358739 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 4 12:34:21.358746 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 4 12:34:21.358752 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 4 12:34:21.358760 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 4 12:34:21.358767 kernel: Detected PIPT I-cache on CPU0 Nov 4 12:34:21.358773 kernel: CPU features: detected: GIC system register CPU interface Nov 4 12:34:21.358780 kernel: CPU features: detected: Spectre-v4 Nov 4 12:34:21.358787 kernel: CPU features: detected: Spectre-BHB Nov 4 12:34:21.358795 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 4 12:34:21.358802 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 4 12:34:21.358808 kernel: CPU features: detected: ARM erratum 1418040 Nov 4 12:34:21.358815 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 4 12:34:21.358822 kernel: alternatives: applying boot alternatives Nov 4 12:34:21.358830 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=03857d169a2df39cb9cf428f5c3ec4e76f72bbd8ea41fdc44c442b7e7c3fbee3 Nov 4 12:34:21.358837 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 4 12:34:21.358844 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 4 12:34:21.358851 kernel: Fallback order for Node 0: 0 Nov 4 12:34:21.358857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 4 12:34:21.358865 kernel: Policy zone: DMA Nov 4 12:34:21.358872 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 4 12:34:21.358879 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 4 12:34:21.358885 kernel: software IO TLB: area num 4. Nov 4 12:34:21.358892 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 4 12:34:21.358899 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 4 12:34:21.358906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 4 12:34:21.358913 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 4 12:34:21.358920 kernel: rcu: RCU event tracing is enabled. Nov 4 12:34:21.358927 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 4 12:34:21.358934 kernel: Trampoline variant of Tasks RCU enabled. Nov 4 12:34:21.358942 kernel: Tracing variant of Tasks RCU enabled. Nov 4 12:34:21.358949 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 4 12:34:21.358956 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 4 12:34:21.358963 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 4 12:34:21.358970 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 4 12:34:21.358976 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 4 12:34:21.358983 kernel: GICv3: 256 SPIs implemented Nov 4 12:34:21.358990 kernel: GICv3: 0 Extended SPIs implemented Nov 4 12:34:21.358996 kernel: Root IRQ handler: gic_handle_irq Nov 4 12:34:21.359003 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 4 12:34:21.359010 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 4 12:34:21.359018 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 4 12:34:21.359025 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 4 12:34:21.359032 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 4 12:34:21.359039 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 4 12:34:21.359046 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 4 12:34:21.359053 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 4 12:34:21.359059 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 4 12:34:21.359066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:34:21.359073 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 4 12:34:21.359080 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 4 12:34:21.359088 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 4 12:34:21.359096 kernel: arm-pv: using stolen time PV Nov 4 12:34:21.359103 kernel: Console: colour dummy device 80x25 Nov 4 12:34:21.359111 kernel: ACPI: Core revision 20240827 Nov 4 12:34:21.359118 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 4 12:34:21.359126 kernel: pid_max: default: 32768 minimum: 301 Nov 4 12:34:21.359140 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 4 12:34:21.359147 kernel: landlock: Up and running. Nov 4 12:34:21.359154 kernel: SELinux: Initializing. Nov 4 12:34:21.359163 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 4 12:34:21.359171 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 4 12:34:21.359178 kernel: rcu: Hierarchical SRCU implementation. Nov 4 12:34:21.359185 kernel: rcu: Max phase no-delay instances is 400. Nov 4 12:34:21.359193 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 4 12:34:21.359200 kernel: Remapping and enabling EFI services. Nov 4 12:34:21.359207 kernel: smp: Bringing up secondary CPUs ... 
Nov 4 12:34:21.359218 kernel: Detected PIPT I-cache on CPU1 Nov 4 12:34:21.359230 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 4 12:34:21.359239 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 4 12:34:21.359246 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:34:21.359254 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 4 12:34:21.359261 kernel: Detected PIPT I-cache on CPU2 Nov 4 12:34:21.359269 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 4 12:34:21.359278 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 4 12:34:21.359286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:34:21.359295 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 4 12:34:21.359303 kernel: Detected PIPT I-cache on CPU3 Nov 4 12:34:21.359310 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 4 12:34:21.359318 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 4 12:34:21.359326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:34:21.359334 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 4 12:34:21.359342 kernel: smp: Brought up 1 node, 4 CPUs Nov 4 12:34:21.359350 kernel: SMP: Total of 4 processors activated. Nov 4 12:34:21.359359 kernel: CPU: All CPU(s) started at EL1 Nov 4 12:34:21.359368 kernel: CPU features: detected: 32-bit EL0 Support Nov 4 12:34:21.359378 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 4 12:34:21.359386 kernel: CPU features: detected: Common not Private translations Nov 4 12:34:21.359395 kernel: CPU features: detected: CRC32 instructions Nov 4 12:34:21.359402 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 4 12:34:21.359410 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 4 12:34:21.359418 kernel: CPU features: detected: LSE atomic instructions Nov 4 12:34:21.359427 kernel: CPU features: detected: Privileged Access Never Nov 4 12:34:21.359435 kernel: CPU features: detected: RAS Extension Support Nov 4 12:34:21.359442 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 4 12:34:21.359451 kernel: alternatives: applying system-wide alternatives Nov 4 12:34:21.359459 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 4 12:34:21.359467 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Nov 4 12:34:21.359476 kernel: devtmpfs: initialized Nov 4 12:34:21.359484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 4 12:34:21.359492 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 4 12:34:21.359500 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 4 12:34:21.359511 kernel: 0 pages in range for non-PLT usage Nov 4 12:34:21.359521 kernel: 515056 pages in range for PLT usage Nov 4 12:34:21.359529 kernel: pinctrl core: initialized pinctrl subsystem Nov 4 12:34:21.359536 kernel: SMBIOS 3.0.0 present. 
Nov 4 12:34:21.359551 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 4 12:34:21.359558 kernel: DMI: Memory slots populated: 1/1 Nov 4 12:34:21.359566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 4 12:34:21.359573 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 4 12:34:21.359582 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 4 12:34:21.359590 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 4 12:34:21.359598 kernel: audit: initializing netlink subsys (disabled) Nov 4 12:34:21.359605 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Nov 4 12:34:21.359612 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 4 12:34:21.359620 kernel: cpuidle: using governor menu Nov 4 12:34:21.359627 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 4 12:34:21.359636 kernel: ASID allocator initialised with 32768 entries Nov 4 12:34:21.359644 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 4 12:34:21.359651 kernel: Serial: AMBA PL011 UART driver Nov 4 12:34:21.359658 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 4 12:34:21.359666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 4 12:34:21.359673 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 4 12:34:21.359681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 4 12:34:21.359689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 4 12:34:21.359697 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 4 12:34:21.359704 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 4 12:34:21.359712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 4 12:34:21.359719 kernel: ACPI: Added _OSI(Module Device) Nov 4 12:34:21.359726 kernel: ACPI: Added _OSI(Processor Device) Nov 4 12:34:21.359734 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 4 12:34:21.359742 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 4 12:34:21.359750 kernel: ACPI: Interpreter enabled Nov 4 12:34:21.359758 kernel: ACPI: Using GIC for interrupt routing Nov 4 12:34:21.359765 kernel: ACPI: MCFG table detected, 1 entries Nov 4 12:34:21.359772 kernel: ACPI: CPU0 has been hot-added Nov 4 12:34:21.359780 kernel: ACPI: CPU1 has been hot-added Nov 4 12:34:21.359787 kernel: ACPI: CPU2 has been hot-added Nov 4 12:34:21.359794 kernel: ACPI: CPU3 has been hot-added Nov 4 12:34:21.359803 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 4 12:34:21.359811 kernel: printk: legacy console [ttyAMA0] enabled Nov 4 12:34:21.359818 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 4 12:34:21.359967 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 4 12:34:21.360055 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 4 12:34:21.360145 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 4 12:34:21.360229 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 4 12:34:21.360309 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 4 12:34:21.360319 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 4 12:34:21.360326 kernel: PCI host bridge to bus 0000:00 Nov 4 
12:34:21.360411 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 4 12:34:21.360485 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 4 12:34:21.360571 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 4 12:34:21.360645 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 4 12:34:21.360743 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 4 12:34:21.360834 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 4 12:34:21.360920 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 4 12:34:21.361003 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 4 12:34:21.361084 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 4 12:34:21.361171 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 4 12:34:21.361251 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 4 12:34:21.361341 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 4 12:34:21.361413 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 4 12:34:21.361486 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 4 12:34:21.361581 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 4 12:34:21.361593 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 4 12:34:21.361600 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 4 12:34:21.361608 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 4 12:34:21.361616 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 4 12:34:21.361623 kernel: iommu: Default domain type: Translated Nov 4 12:34:21.361633 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 4 12:34:21.361640 kernel: efivars: Registered efivars operations Nov 4 12:34:21.361648 kernel: vgaarb: loaded Nov 4 12:34:21.361655 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 4 12:34:21.361663 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 12:34:21.361670 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 12:34:21.361678 kernel: pnp: PnP ACPI init Nov 4 12:34:21.361774 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 4 12:34:21.361785 kernel: pnp: PnP ACPI: found 1 devices Nov 4 12:34:21.361793 kernel: NET: Registered PF_INET protocol family Nov 4 12:34:21.361801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 4 12:34:21.361809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 4 12:34:21.361816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 12:34:21.361824 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 4 12:34:21.361833 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 4 12:34:21.361841 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 4 12:34:21.361848 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 4 12:34:21.361856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 4 12:34:21.361864 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 12:34:21.361871 kernel: PCI: CLS 0 bytes, default 64 Nov 4 12:34:21.361878 kernel: kvm [1]: HYP mode not available Nov 4 12:34:21.361887 kernel: Initialise system 
trusted keyrings Nov 4 12:34:21.361895 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 4 12:34:21.361902 kernel: Key type asymmetric registered Nov 4 12:34:21.361909 kernel: Asymmetric key parser 'x509' registered Nov 4 12:34:21.361917 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 4 12:34:21.361925 kernel: io scheduler mq-deadline registered Nov 4 12:34:21.361932 kernel: io scheduler kyber registered Nov 4 12:34:21.361941 kernel: io scheduler bfq registered Nov 4 12:34:21.361949 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 4 12:34:21.361956 kernel: ACPI: button: Power Button [PWRB] Nov 4 12:34:21.361964 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 4 12:34:21.362044 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 4 12:34:21.362054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 12:34:21.362062 kernel: thunder_xcv, ver 1.0 Nov 4 12:34:21.362071 kernel: thunder_bgx, ver 1.0 Nov 4 12:34:21.362078 kernel: nicpf, ver 1.0 Nov 4 12:34:21.362086 kernel: nicvf, ver 1.0 Nov 4 12:34:21.362188 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 4 12:34:21.362267 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-04T12:34:20 UTC (1762259660) Nov 4 12:34:21.362278 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 4 12:34:21.362287 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 4 12:34:21.362295 kernel: watchdog: NMI not fully supported Nov 4 12:34:21.362302 kernel: watchdog: Hard watchdog permanently disabled Nov 4 12:34:21.362310 kernel: NET: Registered PF_INET6 protocol family Nov 4 12:34:21.362317 kernel: Segment Routing with IPv6 Nov 4 12:34:21.362325 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 12:34:21.362332 kernel: NET: Registered PF_PACKET protocol family Nov 4 12:34:21.362341 kernel: Key type dns_resolver registered Nov 4 12:34:21.362349 kernel: registered taskstats version 1 Nov 4 12:34:21.362356 kernel: Loading compiled-in X.509 certificates Nov 4 12:34:21.362364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 663f57c0d83c90dfacd5aa64fd10e0e7f59b6b15' Nov 4 12:34:21.362371 kernel: Demotion targets for Node 0: null Nov 4 12:34:21.362379 kernel: Key type .fscrypt registered Nov 4 12:34:21.362386 kernel: Key type fscrypt-provisioning registered Nov 4 12:34:21.362393 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 4 12:34:21.362402 kernel: ima: Allocated hash algorithm: sha1 Nov 4 12:34:21.362410 kernel: ima: No architecture policies found Nov 4 12:34:21.362417 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 4 12:34:21.362425 kernel: clk: Disabling unused clocks Nov 4 12:34:21.362432 kernel: PM: genpd: Disabling unused power domains Nov 4 12:34:21.362439 kernel: Freeing unused kernel memory: 12992K Nov 4 12:34:21.362447 kernel: Run /init as init process Nov 4 12:34:21.362455 kernel: with arguments: Nov 4 12:34:21.362463 kernel: /init Nov 4 12:34:21.362470 kernel: with environment: Nov 4 12:34:21.362477 kernel: HOME=/ Nov 4 12:34:21.362485 kernel: TERM=linux Nov 4 12:34:21.362594 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 4 12:34:21.362676 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 4 12:34:21.362688 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 4 12:34:21.362696 kernel: GPT:16515071 != 27000831 Nov 4 12:34:21.362704 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 4 12:34:21.362711 kernel: GPT:16515071 != 27000831 Nov 4 12:34:21.362718 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 4 12:34:21.362725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 4 12:34:21.362734 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362742 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362749 kernel: SCSI subsystem initialized Nov 4 12:34:21.362757 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362764 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 12:34:21.362772 kernel: device-mapper: uevent: version 1.0.3 Nov 4 12:34:21.362780 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 12:34:21.362789 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 4 12:34:21.362796 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362804 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362811 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362819 kernel: raid6: neonx8 gen() 15789 MB/s Nov 4 12:34:21.362826 kernel: raid6: neonx4 gen() 15829 MB/s Nov 4 12:34:21.362834 kernel: raid6: neonx2 gen() 13217 MB/s Nov 4 12:34:21.362842 kernel: raid6: neonx1 gen() 10463 MB/s Nov 4 12:34:21.362850 kernel: raid6: int64x8 gen() 6916 MB/s Nov 4 12:34:21.362857 kernel: raid6: int64x4 gen() 7362 MB/s Nov 4 12:34:21.362865 kernel: raid6: int64x2 gen() 6111 MB/s Nov 4 12:34:21.362872 kernel: raid6: int64x1 gen() 5055 MB/s Nov 4 12:34:21.362880 kernel: raid6: using algorithm neonx4 gen() 15829 MB/s Nov 4 12:34:21.362887 kernel: raid6: .... xor() 12363 MB/s, rmw enabled Nov 4 12:34:21.362895 kernel: raid6: using neon recovery algorithm Nov 4 12:34:21.362904 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362911 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362918 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362926 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362933 kernel: xor: measuring software checksum speed Nov 4 12:34:21.362941 kernel: 8regs : 21584 MB/sec Nov 4 12:34:21.362949 kernel: 32regs : 21641 MB/sec Nov 4 12:34:21.362956 kernel: arm64_neon : 26613 MB/sec Nov 4 12:34:21.362965 kernel: xor: using function: arm64_neon (26613 MB/sec) Nov 4 12:34:21.362973 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.362980 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 12:34:21.362988 kernel: BTRFS: device fsid a0f53245-1da9-4f46-990c-2f6a958947c8 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (203) Nov 4 12:34:21.362996 kernel: BTRFS info (device dm-0): first mount of filesystem a0f53245-1da9-4f46-990c-2f6a958947c8 Nov 4 12:34:21.363004 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:34:21.363012 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 12:34:21.363021 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 12:34:21.363029 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:34:21.363036 kernel: loop: module loaded Nov 4 12:34:21.363043 kernel: loop0: detected capacity change from 0 to 91464 Nov 4 12:34:21.363051 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 12:34:21.363059 systemd[1]: Successfully made /usr/ read-only. 
Nov 4 12:34:21.363070 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 12:34:21.363080 systemd[1]: Detected virtualization kvm. Nov 4 12:34:21.363088 systemd[1]: Detected architecture arm64. Nov 4 12:34:21.363096 systemd[1]: Running in initrd. Nov 4 12:34:21.363103 systemd[1]: No hostname configured, using default hostname. Nov 4 12:34:21.363112 systemd[1]: Hostname set to . Nov 4 12:34:21.363119 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 12:34:21.363136 systemd[1]: Queued start job for default target initrd.target. Nov 4 12:34:21.363146 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 12:34:21.363154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:34:21.363162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:34:21.363171 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 12:34:21.363182 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 12:34:21.363194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 12:34:21.363207 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 12:34:21.363217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:34:21.363227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:34:21.363240 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 12:34:21.363251 systemd[1]: Reached target paths.target - Path Units. Nov 4 12:34:21.363261 systemd[1]: Reached target slices.target - Slice Units. Nov 4 12:34:21.363270 systemd[1]: Reached target swap.target - Swaps. Nov 4 12:34:21.363279 systemd[1]: Reached target timers.target - Timer Units. Nov 4 12:34:21.363287 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 12:34:21.363295 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 12:34:21.363305 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 12:34:21.363314 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 12:34:21.363322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:34:21.363330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 12:34:21.363339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:34:21.363347 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 12:34:21.363355 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 12:34:21.363365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 12:34:21.363373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 12:34:21.363381 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 4 12:34:21.363390 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 12:34:21.363399 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 12:34:21.363407 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 12:34:21.363415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 12:34:21.363425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:34:21.363433 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 12:34:21.363442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:34:21.363452 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 12:34:21.363460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 12:34:21.363485 systemd-journald[343]: Collecting audit messages is disabled. Nov 4 12:34:21.363506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 12:34:21.363514 kernel: Bridge firewalling registered Nov 4 12:34:21.363522 systemd-journald[343]: Journal started Nov 4 12:34:21.363549 systemd-journald[343]: Runtime Journal (/run/log/journal/c5f574f312fd49e9aef0247918cd1b96) is 6M, max 48.5M, 42.4M free. Nov 4 12:34:21.363894 systemd-modules-load[344]: Inserted module 'br_netfilter' Nov 4 12:34:21.369574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 12:34:21.372406 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 12:34:21.373006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:34:21.376529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 12:34:21.378228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:34:21.380265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 12:34:21.387954 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:34:21.390964 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 12:34:21.395902 systemd-tmpfiles[365]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 12:34:21.399771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:34:21.402710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:34:21.404990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:34:21.408415 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 12:34:21.409814 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 12:34:21.412809 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 4 12:34:21.430616 dracut-cmdline[386]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=03857d169a2df39cb9cf428f5c3ec4e76f72bbd8ea41fdc44c442b7e7c3fbee3 Nov 4 12:34:21.453764 systemd-resolved[385]: Positive Trust Anchors: Nov 4 12:34:21.453782 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 12:34:21.453786 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 12:34:21.453816 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 12:34:21.476472 systemd-resolved[385]: Defaulting to hostname 'linux'. Nov 4 12:34:21.477439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 12:34:21.478716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 12:34:21.509579 kernel: Loading iSCSI transport class v2.0-870. Nov 4 12:34:21.517559 kernel: iscsi: registered transport (tcp) Nov 4 12:34:21.531569 kernel: iscsi: registered transport (qla4xxx) Nov 4 12:34:21.531595 kernel: QLogic iSCSI HBA Driver Nov 4 12:34:21.552560 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 12:34:21.571720 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:34:21.573448 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 12:34:21.622055 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 12:34:21.625699 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 12:34:21.627436 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 12:34:21.666061 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 12:34:21.669732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:34:21.700194 systemd-udevd[629]: Using default interface naming scheme 'v257'. Nov 4 12:34:21.708055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:34:21.710921 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 12:34:21.735068 dracut-pre-trigger[695]: rd.md=0: removing MD RAID activation Nov 4 12:34:21.741629 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 12:34:21.744460 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 12:34:21.763777 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 12:34:21.766021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 12:34:21.790157 systemd-networkd[742]: lo: Link UP Nov 4 12:34:21.790167 systemd-networkd[742]: lo: Gained carrier Nov 4 12:34:21.790676 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 12:34:21.792011 systemd[1]: Reached target network.target - Network. Nov 4 12:34:21.828622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:34:21.830997 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 12:34:21.868986 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 4 12:34:21.876146 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 12:34:21.887041 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 4 12:34:21.894663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 12:34:21.901234 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 4 12:34:21.902631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 12:34:21.905268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:34:21.907709 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 12:34:21.910647 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 12:34:21.916858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 12:34:21.925339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 12:34:21.925470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:34:21.928235 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:34:21.931065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:34:21.938998 disk-uuid[805]: Primary Header is updated. Nov 4 12:34:21.938998 disk-uuid[805]: Secondary Entries is updated. Nov 4 12:34:21.938998 disk-uuid[805]: Secondary Header is updated. Nov 4 12:34:21.941953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 12:34:21.942443 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:34:21.942448 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 12:34:21.944724 systemd-networkd[742]: eth0: Link UP Nov 4 12:34:21.944875 systemd-networkd[742]: eth0: Gained carrier Nov 4 12:34:21.944887 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:34:21.953705 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 12:34:21.965891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:34:23.016393 disk-uuid[813]: Warning: The kernel is still using the old partition table. Nov 4 12:34:23.016393 disk-uuid[813]: The new table will be used at the next reboot or after you Nov 4 12:34:23.016393 disk-uuid[813]: run partprobe(8) or kpartx(8) Nov 4 12:34:23.016393 disk-uuid[813]: The operation has completed successfully. Nov 4 12:34:23.023595 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 4 12:34:23.023726 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 12:34:23.026707 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 12:34:23.057569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (828) Nov 4 12:34:23.057604 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:34:23.059685 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:34:23.062567 kernel: BTRFS info (device vda6): turning on async discard Nov 4 12:34:23.062583 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 12:34:23.067559 kernel: BTRFS info (device vda6): last unmount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:34:23.068361 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 12:34:23.070330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 12:34:23.172442 ignition[847]: Ignition 2.22.0 Nov 4 12:34:23.172460 ignition[847]: Stage: fetch-offline Nov 4 12:34:23.172494 ignition[847]: no configs at "/usr/lib/ignition/base.d" Nov 4 12:34:23.172504 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:34:23.172598 ignition[847]: parsed url from cmdline: "" Nov 4 12:34:23.172601 ignition[847]: no config URL provided Nov 4 12:34:23.172606 ignition[847]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 12:34:23.172614 ignition[847]: no config at "/usr/lib/ignition/user.ign" Nov 4 12:34:23.172651 ignition[847]: op(1): [started] loading QEMU firmware config module Nov 4 12:34:23.172655 ignition[847]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 4 12:34:23.177423 ignition[847]: op(1): [finished] loading QEMU firmware config module Nov 4 12:34:23.218618 ignition[847]: parsing config with SHA512: c3532bac0219dceb2d8d68261a11cd58d4e46c336e99c3213e15f8480530781192d7179ad6c2cd04531ab472378053a756c0596cfe2cdb8c4fff1f2230b213fa Nov 4 12:34:23.224604 unknown[847]: fetched base config from "system" Nov 4 12:34:23.224617 unknown[847]: fetched user config from "qemu" Nov 4 12:34:23.225044 ignition[847]: fetch-offline: fetch-offline passed Nov 4 12:34:23.226822 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 12:34:23.225200 ignition[847]: Ignition finished successfully Nov 4 12:34:23.228379 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 12:34:23.230231 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 12:34:23.259049 ignition[859]: Ignition 2.22.0 Nov 4 12:34:23.259068 ignition[859]: Stage: kargs Nov 4 12:34:23.259230 ignition[859]: no configs at "/usr/lib/ignition/base.d" Nov 4 12:34:23.259238 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:34:23.260012 ignition[859]: kargs: kargs passed Nov 4 12:34:23.260061 ignition[859]: Ignition finished successfully Nov 4 12:34:23.265236 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 12:34:23.267441 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 12:34:23.297685 ignition[867]: Ignition 2.22.0 Nov 4 12:34:23.297701 ignition[867]: Stage: disks Nov 4 12:34:23.297867 ignition[867]: no configs at "/usr/lib/ignition/base.d" Nov 4 12:34:23.301030 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Nov 4 12:34:23.297876 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:34:23.302234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 12:34:23.298676 ignition[867]: disks: disks passed Nov 4 12:34:23.304355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 12:34:23.298723 ignition[867]: Ignition finished successfully Nov 4 12:34:23.306577 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 12:34:23.308581 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 12:34:23.310155 systemd[1]: Reached target basic.target - Basic System. Nov 4 12:34:23.313194 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 12:34:23.345050 systemd-fsck[877]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 12:34:23.349396 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 12:34:23.352628 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 12:34:23.409663 systemd-networkd[742]: eth0: Gained IPv6LL Nov 4 12:34:23.416566 kernel: EXT4-fs (vda9): mounted filesystem 9b363c44-0d55-4856-b006-3e673304a340 r/w with ordered data mode. Quota mode: none. Nov 4 12:34:23.417080 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 12:34:23.418433 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 12:34:23.421016 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 12:34:23.422832 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 12:34:23.423894 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 12:34:23.423929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 12:34:23.423957 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 12:34:23.439300 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 12:34:23.442030 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 12:34:23.445614 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Nov 4 12:34:23.448420 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:34:23.448469 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:34:23.451790 kernel: BTRFS info (device vda6): turning on async discard Nov 4 12:34:23.451816 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 12:34:23.452877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 12:34:23.480829 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 12:34:23.485507 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Nov 4 12:34:23.489848 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 12:34:23.493466 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 12:34:23.567595 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 12:34:23.570058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 12:34:23.571676 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 12:34:23.599442 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 4 12:34:23.602862 kernel: BTRFS info (device vda6): last unmount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:34:23.615724 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 12:34:23.631035 ignition[1000]: INFO : Ignition 2.22.0 Nov 4 12:34:23.631035 ignition[1000]: INFO : Stage: mount Nov 4 12:34:23.632636 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 12:34:23.632636 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:34:23.632636 ignition[1000]: INFO : mount: mount passed Nov 4 12:34:23.632636 ignition[1000]: INFO : Ignition finished successfully Nov 4 12:34:23.633679 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 12:34:23.637172 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 12:34:24.418710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 12:34:24.449602 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Nov 4 12:34:24.449650 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:34:24.449661 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:34:24.453650 kernel: BTRFS info (device vda6): turning on async discard Nov 4 12:34:24.453691 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 12:34:24.455022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 12:34:24.496733 ignition[1030]: INFO : Ignition 2.22.0 Nov 4 12:34:24.496733 ignition[1030]: INFO : Stage: files Nov 4 12:34:24.498471 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 12:34:24.498471 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:34:24.498471 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Nov 4 12:34:24.501948 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 12:34:24.501948 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 12:34:24.501948 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 12:34:24.501948 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 12:34:24.501948 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 12:34:24.501748 unknown[1030]: wrote ssh authorized keys file for user: core Nov 4 12:34:24.511315 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 4 12:34:24.511315 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 4 12:34:24.532641 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 12:34:24.740225 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 4 12:34:24.740225 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 12:34:24.744340 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 4 12:34:24.958065 ignition[1030]: 
INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 4 12:34:25.039808 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 12:34:25.041755 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 12:34:25.056008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 12:34:25.056008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 12:34:25.056008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 4 12:34:25.056008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 4 12:34:25.056008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 4 12:34:25.056008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 4 12:34:25.359839 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 4 12:34:25.588354 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 4 12:34:25.588354 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 4 12:34:25.592444 
ignition[1030]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 4 12:34:25.592444 ignition[1030]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 12:34:25.607015 ignition[1030]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 12:34:25.609813 ignition[1030]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 12:34:25.611986 ignition[1030]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 4 12:34:25.611986 ignition[1030]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 4 12:34:25.611986 ignition[1030]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 12:34:25.611986 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 12:34:25.611986 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 12:34:25.611986 ignition[1030]: INFO : files: files passed Nov 4 12:34:25.611986 ignition[1030]: INFO : Ignition finished successfully Nov 4 12:34:25.613115 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 12:34:25.617334 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 12:34:25.628872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 12:34:25.633827 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 12:34:25.633925 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 12:34:25.638571 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Nov 4 12:34:25.642435 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 12:34:25.642435 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 12:34:25.645860 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 12:34:25.646002 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 12:34:25.649091 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 12:34:25.651616 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 12:34:25.705821 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 12:34:25.705959 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 12:34:25.708222 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 12:34:25.710205 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 12:34:25.712233 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Nov 4 12:34:25.713015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 12:34:25.748590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 12:34:25.751178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 12:34:25.766470 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 12:34:25.766628 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 12:34:25.769013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:34:25.771303 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 12:34:25.773255 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 12:34:25.773381 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 12:34:25.776066 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 12:34:25.778202 systemd[1]: Stopped target basic.target - Basic System. Nov 4 12:34:25.779975 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 12:34:25.781815 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 12:34:25.783787 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 12:34:25.785750 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 12:34:25.787763 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 12:34:25.789662 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 12:34:25.791787 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 12:34:25.793801 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 12:34:25.795615 systemd[1]: Stopped target swap.target - Swaps. Nov 4 12:34:25.797280 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 12:34:25.797415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 12:34:25.799818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:34:25.801833 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:34:25.803797 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 12:34:25.803913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:34:25.805942 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 12:34:25.806067 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 12:34:25.809029 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 12:34:25.809160 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 12:34:25.811224 systemd[1]: Stopped target paths.target - Path Units. Nov 4 12:34:25.812952 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 12:34:25.813068 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:34:25.815139 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 12:34:25.817031 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 12:34:25.818586 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 12:34:25.818682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 12:34:25.820554 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Nov 4 12:34:25.820646 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 12:34:25.822950 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 12:34:25.823070 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 12:34:25.824905 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 12:34:25.825012 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 12:34:25.827491 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 12:34:25.829985 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 12:34:25.830932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 12:34:25.831054 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:34:25.833118 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 12:34:25.833230 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:34:25.835277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 12:34:25.835386 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 12:34:25.841321 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 12:34:25.841684 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 12:34:25.851633 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 12:34:25.857145 ignition[1087]: INFO : Ignition 2.22.0 Nov 4 12:34:25.857145 ignition[1087]: INFO : Stage: umount Nov 4 12:34:25.858822 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 12:34:25.858822 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:34:25.858822 ignition[1087]: INFO : umount: umount passed Nov 4 12:34:25.858822 ignition[1087]: INFO : Ignition finished successfully Nov 4 12:34:25.860120 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 12:34:25.860219 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 12:34:25.863828 systemd[1]: Stopped target network.target - Network. Nov 4 12:34:25.865270 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 12:34:25.865334 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 12:34:25.867078 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 12:34:25.867145 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 12:34:25.868910 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 12:34:25.868960 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 12:34:25.870760 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 12:34:25.870803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 12:34:25.872688 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 12:34:25.874599 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 12:34:25.883299 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 12:34:25.883390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 12:34:25.890255 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 12:34:25.891482 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 12:34:25.895167 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
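Before switching root, systemd tears the initrd environment down in reverse dependency order, which is what the long run of "Stopped target ..." and "Deactivated successfully" messages records. Purely as an illustration of the log line format (timestamp, action, unit), a small stdlib-only parser that summarizes that teardown from a captured console log might look like this; the regular expression assumes exactly the message shapes seen above.

```python
# Illustrative only: summarize the initrd teardown order from a captured
# console log such as the one above.
import re
import sys

PATTERN = re.compile(
    r"(?P<ts>\w+ +\d+ [\d:.]+) systemd\[1\]: "
    r"(?P<action>Stopped target|Stopped|Closed|Finished) (?P<unit>\S+)"
)

def teardown_events(lines):
    """Yield (timestamp, action, unit) tuples for matching journal lines."""
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield m.group("ts"), m.group("action"), m.group("unit")

if __name__ == "__main__":
    for ts, action, unit in teardown_events(sys.stdin):
        print(f"{ts}  {action:<15} {unit}")
```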
Nov 4 12:34:25.896780 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 12:34:25.896817 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:34:25.899643 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 12:34:25.900626 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 12:34:25.900686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 12:34:25.902820 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 12:34:25.902863 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:34:25.904620 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 12:34:25.904662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 12:34:25.906719 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:34:25.910362 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 12:34:25.910448 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 12:34:25.912287 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 12:34:25.912382 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 12:34:25.921497 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 12:34:25.921663 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:34:25.923885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 12:34:25.923922 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 12:34:25.925528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 12:34:25.925611 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:34:25.927762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 12:34:25.927819 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 12:34:25.930611 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 12:34:25.930664 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 12:34:25.933570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 12:34:25.933629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 12:34:25.937333 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 12:34:25.938742 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 12:34:25.938799 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:34:25.940860 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 12:34:25.940904 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:34:25.942892 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 12:34:25.942937 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:34:25.945246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 12:34:25.945297 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:34:25.947525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 4 12:34:25.947591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:34:25.950139 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 12:34:25.957679 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 12:34:25.963222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 12:34:25.963334 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 12:34:25.965876 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 12:34:25.968430 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 12:34:25.992737 systemd[1]: Switching root. Nov 4 12:34:26.033687 systemd-journald[343]: Journal stopped Nov 4 12:34:26.811655 systemd-journald[343]: Received SIGTERM from PID 1 (systemd). Nov 4 12:34:26.811708 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 12:34:26.811726 kernel: SELinux: policy capability open_perms=1 Nov 4 12:34:26.811738 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 12:34:26.811747 kernel: SELinux: policy capability always_check_network=0 Nov 4 12:34:26.811761 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 12:34:26.811770 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 12:34:26.811780 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 12:34:26.811793 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 12:34:26.811804 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 12:34:26.811814 systemd[1]: Successfully loaded SELinux policy in 62.031ms. Nov 4 12:34:26.811830 kernel: audit: type=1403 audit(1762259666.240:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 12:34:26.811841 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.449ms. Nov 4 12:34:26.811853 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 12:34:26.811866 systemd[1]: Detected virtualization kvm. Nov 4 12:34:26.811876 systemd[1]: Detected architecture arm64. Nov 4 12:34:26.811888 systemd[1]: Detected first boot. Nov 4 12:34:26.811899 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 12:34:26.811909 zram_generator::config[1134]: No configuration found. Nov 4 12:34:26.811920 kernel: NET: Registered PF_VSOCK protocol family Nov 4 12:34:26.811930 systemd[1]: Populated /etc with preset unit settings. Nov 4 12:34:26.811940 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 12:34:26.811952 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 12:34:26.811962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 12:34:26.811974 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 12:34:26.811984 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 12:34:26.811995 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 12:34:26.812005 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 12:34:26.812016 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
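After the switch to the real root, the kernel lines list the capabilities of the freshly loaded SELinux policy (network_peer_controls, open_perms, and so on). On a running host the same flags are exposed through selinuxfs; the sketch below assumes the conventional /sys/fs/selinux mount point and simply degrades to a notice where SELinux is not enabled.

```python
# Sketch: read the SELinux policy capabilities that the kernel reported above
# from selinuxfs. Path assumes the usual /sys/fs/selinux mount point.
from pathlib import Path

CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities():
    if not CAPS_DIR.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in sorted(CAPS_DIR.iterdir())}

if __name__ == "__main__":
    caps = policy_capabilities()
    if not caps:
        print("selinuxfs policy_capabilities unavailable on this host")
    for name, value in caps.items():
        print(f"{name}={value}")   # e.g. open_perms=1, always_check_network=0
```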
Nov 4 12:34:26.812028 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 12:34:26.812039 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 12:34:26.812049 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 12:34:26.812060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:34:26.812071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:34:26.812082 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 12:34:26.812093 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 12:34:26.812114 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 12:34:26.812127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 12:34:26.812138 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 4 12:34:26.812150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:34:26.812161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:34:26.812172 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 12:34:26.812184 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 12:34:26.812198 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 12:34:26.812208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 12:34:26.812219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:34:26.812230 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 12:34:26.812241 systemd[1]: Reached target slices.target - Slice Units. Nov 4 12:34:26.812252 systemd[1]: Reached target swap.target - Swaps. Nov 4 12:34:26.812262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 12:34:26.812274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 12:34:26.812285 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 12:34:26.812295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:34:26.812306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 12:34:26.812316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:34:26.812327 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 12:34:26.812338 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 12:34:26.812350 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 12:34:26.812360 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 12:34:26.812371 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 12:34:26.812381 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 12:34:26.812392 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 12:34:26.812404 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
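Most of the messages above are systemd establishing unit state for the new system manager: slices are created, automounts are set up, and a series of targets is reached or marked stopped. On a live machine the equivalent snapshot is available from systemctl; the sketch below just shells out to it and assumes systemctl is on PATH.

```python
# Sketch: list target units and their current state, the live-system view of
# the "Reached target ..." / "Stopped target ..." messages above.
import subprocess

out = subprocess.run(
    ["systemctl", "list-units", "--type=target", "--no-pager", "--no-legend"],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    print(line)   # e.g. "basic.target loaded active active Basic System"
```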
Nov 4 12:34:26.812416 systemd[1]: Reached target machines.target - Containers. Nov 4 12:34:26.812428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 12:34:26.812439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:34:26.812450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 12:34:26.812461 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 12:34:26.812472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:34:26.812483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 12:34:26.812496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:34:26.812507 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 12:34:26.812518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:34:26.812529 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 12:34:26.812599 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 12:34:26.812614 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 12:34:26.812625 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 12:34:26.812638 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 12:34:26.812650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:34:26.812661 kernel: fuse: init (API version 7.41) Nov 4 12:34:26.812671 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 12:34:26.812682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 12:34:26.812693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 12:34:26.812704 kernel: ACPI: bus type drm_connector registered Nov 4 12:34:26.812731 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 12:34:26.812742 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 12:34:26.812754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 12:34:26.812764 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 12:34:26.812795 systemd-journald[1207]: Collecting audit messages is disabled. Nov 4 12:34:26.812817 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 12:34:26.812832 systemd-journald[1207]: Journal started Nov 4 12:34:26.812853 systemd-journald[1207]: Runtime Journal (/run/log/journal/c5f574f312fd49e9aef0247918cd1b96) is 6M, max 48.5M, 42.4M free. Nov 4 12:34:26.586629 systemd[1]: Queued start job for default target multi-user.target. Nov 4 12:34:26.606457 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 12:34:26.606885 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 12:34:26.816045 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 12:34:26.816979 systemd[1]: Mounted media.mount - External Media Directory. 
Nov 4 12:34:26.818131 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 12:34:26.819382 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 12:34:26.820692 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 12:34:26.822642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 12:34:26.824076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:34:26.825639 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 12:34:26.825804 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 12:34:26.827241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:34:26.827394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:34:26.828843 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 12:34:26.829010 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 12:34:26.830438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:34:26.830647 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 12:34:26.832183 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 12:34:26.832338 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 12:34:26.833756 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:34:26.833907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:34:26.835393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 12:34:26.836953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:34:26.839184 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 12:34:26.841047 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 12:34:26.853886 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 12:34:26.855430 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 12:34:26.857760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 12:34:26.859774 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 12:34:26.860984 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 12:34:26.861024 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 12:34:26.862891 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 12:34:26.864286 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:34:26.870312 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 12:34:26.872433 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 12:34:26.873713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 12:34:26.874683 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
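The modprobe@&lt;module&gt;.service instances finishing above are template units whose only job is to load the named kernel module. Whether each module actually ended up loaded (or is built into the kernel) can be checked under /sys/module; the module names below are taken from the log, everything else is illustrative.

```python
# Sketch: check for the modules loaded by the modprobe@ template units above.
# A directory under /sys/module exists for loaded and built-in modules alike.
from pathlib import Path

for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    present = Path("/sys/module", module).is_dir()
    print(f"{module}: {'present' if present else 'not loaded'}")
```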
Nov 4 12:34:26.875914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 12:34:26.879681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:34:26.880369 systemd-journald[1207]: Time spent on flushing to /var/log/journal/c5f574f312fd49e9aef0247918cd1b96 is 13.086ms for 886 entries. Nov 4 12:34:26.880369 systemd-journald[1207]: System Journal (/var/log/journal/c5f574f312fd49e9aef0247918cd1b96) is 8M, max 163.5M, 155.5M free. Nov 4 12:34:26.899554 systemd-journald[1207]: Received client request to flush runtime journal. Nov 4 12:34:26.882856 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 12:34:26.885602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 12:34:26.889587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:34:26.892341 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 12:34:26.893857 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 12:34:26.895368 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 12:34:26.899015 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 12:34:26.902825 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 12:34:26.903653 kernel: loop1: detected capacity change from 0 to 119344 Nov 4 12:34:26.905510 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 12:34:26.910970 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:34:26.912911 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Nov 4 12:34:26.912931 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Nov 4 12:34:26.923400 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:34:26.926090 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 12:34:26.934563 kernel: loop2: detected capacity change from 0 to 100624 Nov 4 12:34:26.935952 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 12:34:26.948623 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 12:34:26.951409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 12:34:26.953439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 12:34:26.970891 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 12:34:26.973560 kernel: loop3: detected capacity change from 0 to 211168 Nov 4 12:34:26.982714 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Nov 4 12:34:26.982735 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Nov 4 12:34:26.986385 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:34:26.998372 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
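The journald messages above show the runtime journal under /run/log/journal being flushed to the persistent one under /var/log/journal (13.086 ms for 886 entries, with the size limits quoted per journal). A small sketch for inspecting the same two locations on a running host, assuming journalctl is available (reading the files may require root or journal-group membership):

```python
# Sketch: report on-disk size of the runtime and persistent journals, then let
# journalctl summarize total usage across both.
import subprocess
from pathlib import Path

for root in (Path("/run/log/journal"), Path("/var/log/journal")):
    files = list(root.glob("*/*.journal")) if root.is_dir() else []
    total = sum(f.stat().st_size for f in files)
    print(f"{root}: {len(files)} journal file(s), {total / 2**20:.1f} MiB")

print(subprocess.run(["journalctl", "--disk-usage"],
                     capture_output=True, text=True).stdout.strip())
```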
Nov 4 12:34:27.004574 kernel: loop4: detected capacity change from 0 to 119344 Nov 4 12:34:27.010565 kernel: loop5: detected capacity change from 0 to 100624 Nov 4 12:34:27.015563 kernel: loop6: detected capacity change from 0 to 211168 Nov 4 12:34:27.021986 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 4 12:34:27.026250 (sd-merge)[1280]: Merged extensions into '/usr'. Nov 4 12:34:27.031203 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 12:34:27.031224 systemd[1]: Reloading... Nov 4 12:34:27.068528 systemd-resolved[1268]: Positive Trust Anchors: Nov 4 12:34:27.068561 systemd-resolved[1268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 12:34:27.068565 systemd-resolved[1268]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 12:34:27.068595 systemd-resolved[1268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 12:34:27.075959 systemd-resolved[1268]: Defaulting to hostname 'linux'. Nov 4 12:34:27.082570 zram_generator::config[1310]: No configuration found. Nov 4 12:34:27.218333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 12:34:27.218613 systemd[1]: Reloading finished in 187 ms. Nov 4 12:34:27.246237 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 12:34:27.247875 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 12:34:27.251217 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 12:34:27.271757 systemd[1]: Starting ensure-sysext.service... Nov 4 12:34:27.273624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 12:34:27.280205 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 12:34:27.285450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:34:27.286902 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... Nov 4 12:34:27.286919 systemd[1]: Reloading... Nov 4 12:34:27.287893 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 12:34:27.287928 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 12:34:27.288165 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 12:34:27.288357 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 12:34:27.289067 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 12:34:27.289276 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Nov 4 12:34:27.289319 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. 
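The "(sd-merge)" lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is also why systemd immediately reloads afterwards. The sketch below only inspects a live host's extension directories and current merge state; the directory names follow the sysext defaults.

```python
# Sketch: show which extension images are staged and what is currently merged
# into /usr, mirroring the sd-merge messages above.
import subprocess
from pathlib import Path

for d in (Path("/etc/extensions"), Path("/var/lib/extensions")):
    if d.is_dir():
        for entry in sorted(d.iterdir()):
            print(f"{d}: {entry.name}")   # e.g. kubernetes.raw

# "status" is systemd-sysext's default verb; it reports the merged hierarchies.
print(subprocess.run(["systemd-sysext", "status"],
                     capture_output=True, text=True).stdout)
```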
Nov 4 12:34:27.293071 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 12:34:27.293079 systemd-tmpfiles[1344]: Skipping /boot Nov 4 12:34:27.299384 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 12:34:27.299474 systemd-tmpfiles[1344]: Skipping /boot Nov 4 12:34:27.324603 systemd-udevd[1347]: Using default interface naming scheme 'v257'. Nov 4 12:34:27.337607 zram_generator::config[1375]: No configuration found. Nov 4 12:34:27.512347 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 4 12:34:27.512633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 12:34:27.514187 systemd[1]: Reloading finished in 227 ms. Nov 4 12:34:27.528110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:34:27.542942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:34:27.560705 systemd[1]: Finished ensure-sysext.service. Nov 4 12:34:27.575375 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 12:34:27.577389 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 12:34:27.578826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:34:27.586390 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 12:34:27.590706 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:34:27.592793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 12:34:27.595195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:34:27.598110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:34:27.599360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:34:27.600407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 12:34:27.602670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:34:27.603980 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 12:34:27.607730 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 12:34:27.612490 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 12:34:27.616771 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 12:34:27.619175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:34:27.624463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:34:27.624720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:34:27.627176 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 12:34:27.627412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 12:34:27.632380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:34:27.632708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
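The earlier "Duplicate line for path ..." notices mean that two tmpfiles.d entries declare the same path, and systemd-tmpfiles keeps the first one it parsed while ignoring the rest. A naive reproduction of that duplicate check is sketched below; it deliberately skips the full config-precedence rules, so treat it as illustrative only.

```python
# Sketch: find paths declared by more than one tmpfiles.d fragment, roughly
# what systemd-tmpfiles warns about above (precedence handling simplified).
from collections import defaultdict
from pathlib import Path

seen = defaultdict(list)
for d in (Path("/etc/tmpfiles.d"), Path("/run/tmpfiles.d"), Path("/usr/lib/tmpfiles.d")):
    if not d.is_dir():
        continue
    for conf in sorted(d.glob("*.conf")):
        for line in conf.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:            # "Type Path Mode User Group Age Argument"
                seen[fields[1]].append(conf.name)

for path, sources in seen.items():
    if len(sources) > 1:
        print(f"{path}: declared in {', '.join(sources)}")
```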
Nov 4 12:34:27.634456 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:34:27.634691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:34:27.636245 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 12:34:27.639135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 12:34:27.651075 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 12:34:27.655248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 12:34:27.655379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 12:34:27.655438 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 12:34:27.660755 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 12:34:27.675384 augenrules[1498]: No rules Nov 4 12:34:27.677840 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 12:34:27.678054 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 12:34:27.682012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:34:27.704533 systemd-networkd[1468]: lo: Link UP Nov 4 12:34:27.704563 systemd-networkd[1468]: lo: Gained carrier Nov 4 12:34:27.705406 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 12:34:27.705753 systemd-networkd[1468]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:34:27.705756 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 12:34:27.706277 systemd-networkd[1468]: eth0: Link UP Nov 4 12:34:27.706402 systemd-networkd[1468]: eth0: Gained carrier Nov 4 12:34:27.706416 systemd-networkd[1468]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:34:27.706862 systemd[1]: Reached target network.target - Network. Nov 4 12:34:27.709254 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 12:34:27.712654 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 12:34:27.714766 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 12:34:27.716694 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 12:34:27.720618 systemd-networkd[1468]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 12:34:27.721669 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Nov 4 12:34:27.722640 systemd-timesyncd[1471]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 12:34:27.722695 systemd-timesyncd[1471]: Initial clock synchronization to Tue 2025-11-04 12:34:27.773988 UTC. Nov 4 12:34:27.732233 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
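systemd-networkd brings eth0 up with a DHCPv4 lease of 10.0.0.141/16 via gateway 10.0.0.1, and systemd-timesyncd then reaches an NTP server on that same gateway address. The lease can be sanity-checked with nothing but the standard library:

```python
# Sketch: restate the DHCPv4 lease reported by systemd-networkd above and
# confirm the gateway is on-link.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.141/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network:  ", iface.network)                   # 10.0.0.0/16
print("netmask:  ", iface.network.netmask)           # 255.255.0.0
print("broadcast:", iface.network.broadcast_address) # 10.0.255.255
print("gateway on-link:", gateway in iface.network)  # True
```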
Nov 4 12:34:27.856489 ldconfig[1452]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 12:34:27.860359 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 12:34:27.865806 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 12:34:27.893284 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 12:34:27.894804 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 12:34:27.897756 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 12:34:27.899068 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 12:34:27.900535 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 12:34:27.901714 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 12:34:27.903136 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 12:34:27.904445 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 12:34:27.904481 systemd[1]: Reached target paths.target - Path Units. Nov 4 12:34:27.905470 systemd[1]: Reached target timers.target - Timer Units. Nov 4 12:34:27.907149 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 12:34:27.909598 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 12:34:27.912201 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 12:34:27.914845 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 12:34:27.916142 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 12:34:27.919334 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 12:34:27.920932 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 12:34:27.922672 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 12:34:27.923838 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 12:34:27.924832 systemd[1]: Reached target basic.target - Basic System. Nov 4 12:34:27.925815 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 12:34:27.925850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 12:34:27.926724 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 12:34:27.928687 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 12:34:27.930591 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 12:34:27.932981 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 12:34:27.935266 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 12:34:27.936407 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 12:34:27.937426 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 4 12:34:27.940052 jq[1522]: false Nov 4 12:34:27.940647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 12:34:27.942628 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 12:34:27.945724 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 12:34:27.949719 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 12:34:27.950851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 12:34:27.951354 extend-filesystems[1523]: Found /dev/vda6 Nov 4 12:34:27.951260 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 12:34:27.953690 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 12:34:27.955561 extend-filesystems[1523]: Found /dev/vda9 Nov 4 12:34:27.955778 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 12:34:27.958724 extend-filesystems[1523]: Checking size of /dev/vda9 Nov 4 12:34:27.968063 extend-filesystems[1523]: Resized partition /dev/vda9 Nov 4 12:34:27.968353 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 12:34:27.969220 jq[1540]: true Nov 4 12:34:27.972472 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 12:34:27.972196 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 12:34:27.972381 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 12:34:27.972658 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 12:34:27.972830 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 12:34:27.977557 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 12:34:27.979859 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 12:34:27.980360 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 12:34:28.005240 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 12:34:28.005303 jq[1554]: true Nov 4 12:34:28.015551 tar[1551]: linux-arm64/LICENSE Nov 4 12:34:28.023405 tar[1551]: linux-arm64/helm Nov 4 12:34:28.025762 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 12:34:28.026363 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 12:34:28.026363 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 12:34:28.026363 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 12:34:28.033584 update_engine[1536]: I20251104 12:34:28.024825 1536 main.cc:92] Flatcar Update Engine starting Nov 4 12:34:28.026006 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 12:34:28.034479 extend-filesystems[1523]: Resized filesystem in /dev/vda9 Nov 4 12:34:28.045758 dbus-daemon[1520]: [system] SELinux support is enabled Nov 4 12:34:28.045965 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 12:34:28.049606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
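extend-filesystems grows the root partition's ext4 filesystem online from 456704 to 1784827 blocks at a 4 KiB block size, as the kernel EXT4-fs lines above confirm. Restated in more familiar units (the only arithmetic here is block count times block size):

```python
# Sketch: convert the resize2fs block counts from the log above into MiB/GiB.
BLOCK = 4096                      # 4 KiB ext4 block size
old_blocks, new_blocks = 456_704, 1_784_827

def fmt(blocks: int) -> str:
    size = blocks * BLOCK
    return f"{size / 2**20:,.0f} MiB ({size / 2**30:.2f} GiB)"

print("before:", fmt(old_blocks))                 # ~1,784 MiB (1.74 GiB)
print("after: ", fmt(new_blocks))                 # ~6,972 MiB (6.81 GiB)
print("growth:", fmt(new_blocks - old_blocks))
```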
Nov 4 12:34:28.049646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 12:34:28.051589 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 12:34:28.051614 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 12:34:28.053043 update_engine[1536]: I20251104 12:34:28.052990 1536 update_check_scheduler.cc:74] Next update check in 6m32s Nov 4 12:34:28.053495 systemd[1]: Started update-engine.service - Update Engine. Nov 4 12:34:28.056840 bash[1587]: Updated "/home/core/.ssh/authorized_keys" Nov 4 12:34:28.061855 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 12:34:28.063713 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 12:34:28.066461 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 12:34:28.074422 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (Power Button) Nov 4 12:34:28.074721 systemd-logind[1532]: New seat seat0. Nov 4 12:34:28.076938 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 12:34:28.116383 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 12:34:28.180671 containerd[1556]: time="2025-11-04T12:34:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 12:34:28.181426 containerd[1556]: time="2025-11-04T12:34:28.181392007Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 12:34:28.190392 containerd[1556]: time="2025-11-04T12:34:28.190354923Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.258µs" Nov 4 12:34:28.190430 containerd[1556]: time="2025-11-04T12:34:28.190391316Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 12:34:28.190430 containerd[1556]: time="2025-11-04T12:34:28.190411968Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 12:34:28.190592 containerd[1556]: time="2025-11-04T12:34:28.190573962Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 12:34:28.190619 containerd[1556]: time="2025-11-04T12:34:28.190594614Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 12:34:28.190644 containerd[1556]: time="2025-11-04T12:34:28.190620781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190686 containerd[1556]: time="2025-11-04T12:34:28.190669331Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190686 containerd[1556]: time="2025-11-04T12:34:28.190682858Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190870 containerd[1556]: time="2025-11-04T12:34:28.190848395Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190870 containerd[1556]: time="2025-11-04T12:34:28.190866712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190908 containerd[1556]: time="2025-11-04T12:34:28.190878869Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190908 containerd[1556]: time="2025-11-04T12:34:28.190886639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 12:34:28.190973 containerd[1556]: time="2025-11-04T12:34:28.190956606Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 12:34:28.191148 containerd[1556]: time="2025-11-04T12:34:28.191129550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 12:34:28.191174 containerd[1556]: time="2025-11-04T12:34:28.191160951Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 12:34:28.191174 containerd[1556]: time="2025-11-04T12:34:28.191170894Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 12:34:28.191564 containerd[1556]: time="2025-11-04T12:34:28.191215499Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 12:34:28.191564 containerd[1556]: time="2025-11-04T12:34:28.191447662Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 12:34:28.191564 containerd[1556]: time="2025-11-04T12:34:28.191516541Z" level=info msg="metadata content store policy set" policy=shared Nov 4 12:34:28.194936 containerd[1556]: time="2025-11-04T12:34:28.194904174Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 12:34:28.194978 containerd[1556]: time="2025-11-04T12:34:28.194959649Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 12:34:28.194978 containerd[1556]: time="2025-11-04T12:34:28.194975027Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.194987185Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.194998698Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.195012224Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.195024463Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.195052361Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.195064840Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 12:34:28.195073 containerd[1556]: time="2025-11-04T12:34:28.195074905Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 12:34:28.195186 containerd[1556]: time="2025-11-04T12:34:28.195084566Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 12:34:28.195186 containerd[1556]: time="2025-11-04T12:34:28.195096603Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 12:34:28.195218 containerd[1556]: time="2025-11-04T12:34:28.195203486Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 12:34:28.195235 containerd[1556]: time="2025-11-04T12:34:28.195223091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 12:34:28.195252 containerd[1556]: time="2025-11-04T12:34:28.195240563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 12:34:28.195273 containerd[1556]: time="2025-11-04T12:34:28.195252157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 12:34:28.195273 containerd[1556]: time="2025-11-04T12:34:28.195263147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 12:34:28.195304 containerd[1556]: time="2025-11-04T12:34:28.195273493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 12:34:28.195304 containerd[1556]: time="2025-11-04T12:34:28.195284201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 12:34:28.195304 containerd[1556]: time="2025-11-04T12:34:28.195294145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 12:34:28.195405 containerd[1556]: time="2025-11-04T12:34:28.195305819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 12:34:28.195405 containerd[1556]: time="2025-11-04T12:34:28.195317534Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 12:34:28.195405 containerd[1556]: time="2025-11-04T12:34:28.195327357Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 12:34:28.196550 containerd[1556]: time="2025-11-04T12:34:28.195518055Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 12:34:28.196550 containerd[1556]: time="2025-11-04T12:34:28.195560284Z" level=info msg="Start snapshots syncer" Nov 4 12:34:28.196550 containerd[1556]: time="2025-11-04T12:34:28.195589229Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.195792487Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.195843010Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196004803Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196107096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196128795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196140026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196153714Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196165872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196178150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196188214Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196212248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: 
time="2025-11-04T12:34:28.196251740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196265347Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196306852Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196322109Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 12:34:28.196626 containerd[1556]: time="2025-11-04T12:34:28.196330603Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 12:34:28.196945 containerd[1556]: time="2025-11-04T12:34:28.196351618Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 12:34:28.196945 containerd[1556]: time="2025-11-04T12:34:28.196756684Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 12:34:28.196945 containerd[1556]: time="2025-11-04T12:34:28.196784140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 12:34:28.196945 containerd[1556]: time="2025-11-04T12:34:28.196801048Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 12:34:28.197703 containerd[1556]: time="2025-11-04T12:34:28.197675794Z" level=info msg="runtime interface created" Nov 4 12:34:28.197703 containerd[1556]: time="2025-11-04T12:34:28.197701236Z" level=info msg="created NRI interface" Nov 4 12:34:28.197746 containerd[1556]: time="2025-11-04T12:34:28.197715608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 12:34:28.197746 containerd[1556]: time="2025-11-04T12:34:28.197739641Z" level=info msg="Connect containerd service" Nov 4 12:34:28.197821 containerd[1556]: time="2025-11-04T12:34:28.197804777Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 12:34:28.198706 containerd[1556]: time="2025-11-04T12:34:28.198679805Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 12:34:28.264197 containerd[1556]: time="2025-11-04T12:34:28.264124850Z" level=info msg="Start subscribing containerd event" Nov 4 12:34:28.264290 containerd[1556]: time="2025-11-04T12:34:28.264209269Z" level=info msg="Start recovering state" Nov 4 12:34:28.264309 containerd[1556]: time="2025-11-04T12:34:28.264301498Z" level=info msg="Start event monitor" Nov 4 12:34:28.264327 containerd[1556]: time="2025-11-04T12:34:28.264314662Z" level=info msg="Start cni network conf syncer for default" Nov 4 12:34:28.264327 containerd[1556]: time="2025-11-04T12:34:28.264323921Z" level=info msg="Start streaming server" Nov 4 12:34:28.264381 containerd[1556]: time="2025-11-04T12:34:28.264333623Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 12:34:28.264381 containerd[1556]: time="2025-11-04T12:34:28.264344171Z" level=info 
msg="runtime interface starting up..." Nov 4 12:34:28.264381 containerd[1556]: time="2025-11-04T12:34:28.264350531Z" level=info msg="starting plugins..." Nov 4 12:34:28.264381 containerd[1556]: time="2025-11-04T12:34:28.264364058Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 12:34:28.264705 containerd[1556]: time="2025-11-04T12:34:28.264680357Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 12:34:28.264803 containerd[1556]: time="2025-11-04T12:34:28.264786515Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 12:34:28.265668 containerd[1556]: time="2025-11-04T12:34:28.265650231Z" level=info msg="containerd successfully booted in 0.085340s" Nov 4 12:34:28.265846 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 12:34:28.312320 tar[1551]: linux-arm64/README.md Nov 4 12:34:28.330604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 12:34:28.673122 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 12:34:28.693665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 12:34:28.696428 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 12:34:28.710695 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 12:34:28.710894 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 12:34:28.713440 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 12:34:28.731966 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 12:34:28.734655 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 12:34:28.736814 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 4 12:34:28.738176 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 12:34:29.169747 systemd-networkd[1468]: eth0: Gained IPv6LL Nov 4 12:34:29.172606 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 12:34:29.174437 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 12:34:29.177227 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 12:34:29.179908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:29.197444 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 12:34:29.214366 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 12:34:29.214617 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 12:34:29.216399 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 12:34:29.219044 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 12:34:29.752256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:29.753972 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 12:34:29.756141 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 12:34:29.756394 systemd[1]: Startup finished in 1.207s (kernel) + 5.111s (initrd) + 3.578s (userspace) = 9.897s. 
Nov 4 12:34:30.104852 kubelet[1659]: E1104 12:34:30.104798 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 12:34:30.107138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 12:34:30.107265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 12:34:30.107586 systemd[1]: kubelet.service: Consumed 759ms CPU time, 258.5M memory peak. Nov 4 12:34:32.567179 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 12:34:32.568284 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:43210.service - OpenSSH per-connection server daemon (10.0.0.1:43210). Nov 4 12:34:32.641332 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 43210 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:32.643215 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:32.649216 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 12:34:32.650245 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 12:34:32.657144 systemd-logind[1532]: New session 1 of user core. Nov 4 12:34:32.669009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 12:34:32.671317 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 12:34:32.692646 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 12:34:32.694828 systemd-logind[1532]: New session c1 of user core. Nov 4 12:34:32.795179 systemd[1678]: Queued start job for default target default.target. Nov 4 12:34:32.811538 systemd[1678]: Created slice app.slice - User Application Slice. Nov 4 12:34:32.811599 systemd[1678]: Reached target paths.target - Paths. Nov 4 12:34:32.811642 systemd[1678]: Reached target timers.target - Timers. Nov 4 12:34:32.812862 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 12:34:32.822253 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 12:34:32.822320 systemd[1678]: Reached target sockets.target - Sockets. Nov 4 12:34:32.822357 systemd[1678]: Reached target basic.target - Basic System. Nov 4 12:34:32.822395 systemd[1678]: Reached target default.target - Main User Target. Nov 4 12:34:32.822423 systemd[1678]: Startup finished in 122ms. Nov 4 12:34:32.822647 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 12:34:32.824120 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 12:34:32.881617 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:43214.service - OpenSSH per-connection server daemon (10.0.0.1:43214). Nov 4 12:34:32.940939 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 43214 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:32.942101 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:32.946497 systemd-logind[1532]: New session 2 of user core. Nov 4 12:34:32.957700 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 4 12:34:33.008972 sshd[1692]: Connection closed by 10.0.0.1 port 43214 Nov 4 12:34:33.009420 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Nov 4 12:34:33.030478 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:43214.service: Deactivated successfully. Nov 4 12:34:33.032045 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 12:34:33.034716 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Nov 4 12:34:33.036623 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:43228.service - OpenSSH per-connection server daemon (10.0.0.1:43228). Nov 4 12:34:33.038240 systemd-logind[1532]: Removed session 2. Nov 4 12:34:33.092379 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 43228 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:33.093729 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:33.097673 systemd-logind[1532]: New session 3 of user core. Nov 4 12:34:33.111721 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 12:34:33.159104 sshd[1701]: Connection closed by 10.0.0.1 port 43228 Nov 4 12:34:33.159577 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Nov 4 12:34:33.174208 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:43228.service: Deactivated successfully. Nov 4 12:34:33.177051 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 12:34:33.178354 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Nov 4 12:34:33.180050 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:43230.service - OpenSSH per-connection server daemon (10.0.0.1:43230). Nov 4 12:34:33.181145 systemd-logind[1532]: Removed session 3. Nov 4 12:34:33.245097 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 43230 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:33.246408 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:33.251107 systemd-logind[1532]: New session 4 of user core. Nov 4 12:34:33.261709 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 12:34:33.316233 sshd[1710]: Connection closed by 10.0.0.1 port 43230 Nov 4 12:34:33.316715 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Nov 4 12:34:33.325436 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:43230.service: Deactivated successfully. Nov 4 12:34:33.326719 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 12:34:33.328695 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Nov 4 12:34:33.334451 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:43232.service - OpenSSH per-connection server daemon (10.0.0.1:43232). Nov 4 12:34:33.335121 systemd-logind[1532]: Removed session 4. Nov 4 12:34:33.385536 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 43232 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:33.386870 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:33.391239 systemd-logind[1532]: New session 5 of user core. Nov 4 12:34:33.405703 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 4 12:34:33.462149 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 12:34:33.462412 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:34:33.475492 sudo[1720]: pam_unix(sudo:session): session closed for user root Nov 4 12:34:33.478305 sshd[1719]: Connection closed by 10.0.0.1 port 43232 Nov 4 12:34:33.478204 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Nov 4 12:34:33.485396 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:43232.service: Deactivated successfully. Nov 4 12:34:33.486898 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 12:34:33.487545 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Nov 4 12:34:33.489712 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:43236.service - OpenSSH per-connection server daemon (10.0.0.1:43236). Nov 4 12:34:33.490137 systemd-logind[1532]: Removed session 5. Nov 4 12:34:33.542453 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 43236 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:33.543629 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:33.548036 systemd-logind[1532]: New session 6 of user core. Nov 4 12:34:33.554689 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 12:34:33.606471 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 12:34:33.606750 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:34:33.611688 sudo[1731]: pam_unix(sudo:session): session closed for user root Nov 4 12:34:33.617300 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 12:34:33.617814 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:34:33.626188 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 12:34:33.662580 augenrules[1753]: No rules Nov 4 12:34:33.663672 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 12:34:33.664647 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 12:34:33.665868 sudo[1730]: pam_unix(sudo:session): session closed for user root Nov 4 12:34:33.667206 sshd[1729]: Connection closed by 10.0.0.1 port 43236 Nov 4 12:34:33.667518 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Nov 4 12:34:33.680497 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:43236.service: Deactivated successfully. Nov 4 12:34:33.682715 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 12:34:33.685063 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Nov 4 12:34:33.686063 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:43240.service - OpenSSH per-connection server daemon (10.0.0.1:43240). Nov 4 12:34:33.686977 systemd-logind[1532]: Removed session 6. Nov 4 12:34:33.743073 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 43240 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:34:33.744181 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:34:33.748450 systemd-logind[1532]: New session 7 of user core. Nov 4 12:34:33.766786 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 4 12:34:33.818992 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 12:34:33.819577 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:34:34.087961 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 12:34:34.101841 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 12:34:34.297137 dockerd[1786]: time="2025-11-04T12:34:34.297073754Z" level=info msg="Starting up" Nov 4 12:34:34.297980 dockerd[1786]: time="2025-11-04T12:34:34.297948028Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 12:34:34.307911 dockerd[1786]: time="2025-11-04T12:34:34.307855563Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 12:34:34.459922 dockerd[1786]: time="2025-11-04T12:34:34.459817458Z" level=info msg="Loading containers: start." Nov 4 12:34:34.468558 kernel: Initializing XFRM netlink socket Nov 4 12:34:34.658440 systemd-networkd[1468]: docker0: Link UP Nov 4 12:34:34.661717 dockerd[1786]: time="2025-11-04T12:34:34.661674294Z" level=info msg="Loading containers: done." Nov 4 12:34:34.675223 dockerd[1786]: time="2025-11-04T12:34:34.675165857Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 12:34:34.675348 dockerd[1786]: time="2025-11-04T12:34:34.675257441Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 12:34:34.675348 dockerd[1786]: time="2025-11-04T12:34:34.675336468Z" level=info msg="Initializing buildkit" Nov 4 12:34:34.696893 dockerd[1786]: time="2025-11-04T12:34:34.696849621Z" level=info msg="Completed buildkit initialization" Nov 4 12:34:34.701489 dockerd[1786]: time="2025-11-04T12:34:34.701441989Z" level=info msg="Daemon has completed initialization" Nov 4 12:34:34.702002 dockerd[1786]: time="2025-11-04T12:34:34.701534094Z" level=info msg="API listen on /run/docker.sock" Nov 4 12:34:34.701730 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 12:34:35.261444 containerd[1556]: time="2025-11-04T12:34:35.261406867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 12:34:35.318002 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1909897091-merged.mount: Deactivated successfully. Nov 4 12:34:35.787770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197180059.mount: Deactivated successfully. 
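dockerd reports completing initialization and exposing its API on /run/docker.sock (on Flatcar /var/run is a symlink to /run, so the SDK's default socket path resolves to the same place). A quick sketch, assuming the Docker Go SDK is available, of confirming the daemon answers on that socket:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/docker/client"
)

func main() {
    // FromEnv falls back to the default unix socket when DOCKER_HOST is unset,
    // which is the endpoint the daemon logged above ("API listen on /run/docker.sock").
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    ping, err := cli.Ping(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("docker API version:", ping.APIVersion)
}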
Nov 4 12:34:36.912113 containerd[1556]: time="2025-11-04T12:34:36.912064144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:36.912852 containerd[1556]: time="2025-11-04T12:34:36.912822415Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Nov 4 12:34:36.913840 containerd[1556]: time="2025-11-04T12:34:36.913801453Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:36.918342 containerd[1556]: time="2025-11-04T12:34:36.918147349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:36.919179 containerd[1556]: time="2025-11-04T12:34:36.919146591Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.657700306s" Nov 4 12:34:36.919238 containerd[1556]: time="2025-11-04T12:34:36.919184795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 4 12:34:36.920359 containerd[1556]: time="2025-11-04T12:34:36.920336492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 12:34:38.071347 containerd[1556]: time="2025-11-04T12:34:38.071294224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:38.071960 containerd[1556]: time="2025-11-04T12:34:38.071919559Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Nov 4 12:34:38.072906 containerd[1556]: time="2025-11-04T12:34:38.072857382Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:38.075837 containerd[1556]: time="2025-11-04T12:34:38.075785605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:38.076858 containerd[1556]: time="2025-11-04T12:34:38.076686966Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.156249054s" Nov 4 12:34:38.076858 containerd[1556]: time="2025-11-04T12:34:38.076717898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 4 12:34:38.077180 containerd[1556]: 
time="2025-11-04T12:34:38.077111883Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 12:34:39.363174 containerd[1556]: time="2025-11-04T12:34:39.363114880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:39.363676 containerd[1556]: time="2025-11-04T12:34:39.363641739Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Nov 4 12:34:39.364592 containerd[1556]: time="2025-11-04T12:34:39.364561057Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:39.367633 containerd[1556]: time="2025-11-04T12:34:39.367599065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:39.368561 containerd[1556]: time="2025-11-04T12:34:39.368516019Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.291372805s" Nov 4 12:34:39.368561 containerd[1556]: time="2025-11-04T12:34:39.368553354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 4 12:34:39.369226 containerd[1556]: time="2025-11-04T12:34:39.368938884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 12:34:40.121091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 12:34:40.122619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:40.260885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:40.274959 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 12:34:40.313388 kubelet[2085]: E1104 12:34:40.313337 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 12:34:40.316677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 12:34:40.316808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 12:34:40.317124 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.2M memory peak. Nov 4 12:34:40.389558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586255489.mount: Deactivated successfully. 
Nov 4 12:34:41.131185 containerd[1556]: time="2025-11-04T12:34:41.131131710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:41.132096 containerd[1556]: time="2025-11-04T12:34:41.132052872Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Nov 4 12:34:41.132956 containerd[1556]: time="2025-11-04T12:34:41.132923657Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:41.134618 containerd[1556]: time="2025-11-04T12:34:41.134585738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:41.135273 containerd[1556]: time="2025-11-04T12:34:41.135227103Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.766256336s" Nov 4 12:34:41.135273 containerd[1556]: time="2025-11-04T12:34:41.135268390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 4 12:34:41.135916 containerd[1556]: time="2025-11-04T12:34:41.135894058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 12:34:41.659498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153295863.mount: Deactivated successfully. 
Nov 4 12:34:42.997490 containerd[1556]: time="2025-11-04T12:34:42.997430902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:42.997981 containerd[1556]: time="2025-11-04T12:34:42.997946492Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Nov 4 12:34:42.998987 containerd[1556]: time="2025-11-04T12:34:42.998947764Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:43.001497 containerd[1556]: time="2025-11-04T12:34:43.001454974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:43.002926 containerd[1556]: time="2025-11-04T12:34:43.002681436Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.866756466s" Nov 4 12:34:43.002926 containerd[1556]: time="2025-11-04T12:34:43.002722872Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 4 12:34:43.003249 containerd[1556]: time="2025-11-04T12:34:43.003226789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 12:34:43.450347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943050409.mount: Deactivated successfully. 
Nov 4 12:34:43.454645 containerd[1556]: time="2025-11-04T12:34:43.454594440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:34:43.455179 containerd[1556]: time="2025-11-04T12:34:43.455138272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 4 12:34:43.456493 containerd[1556]: time="2025-11-04T12:34:43.456051263Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:34:43.458275 containerd[1556]: time="2025-11-04T12:34:43.458233393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:34:43.458991 containerd[1556]: time="2025-11-04T12:34:43.458956460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 455.546352ms" Nov 4 12:34:43.458991 containerd[1556]: time="2025-11-04T12:34:43.458985925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 4 12:34:43.459584 containerd[1556]: time="2025-11-04T12:34:43.459490683Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 12:34:43.852959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657555241.mount: Deactivated successfully. 
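The PullImage entries above are the CRI plugin fetching the control-plane images into containerd's k8s.io namespace over the same socket it serves on. A sketch of the equivalent pull through the containerd Go client (shown with the pre-2.0 module path github.com/containerd/containerd; against the containerd 2.x daemon logged here the same calls live under github.com/containerd/containerd/v2/client):

package main

import (
    "context"
    "fmt"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Same endpoint and namespace the CRI plugin uses for the pulls logged above.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Image reference taken from the log; WithPullUnpack also unpacks it into the
    // default snapshotter (overlayfs, per the snapshotter path logged earlier).
    img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    size, err := img.Size(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s: %d bytes\n", img.Name(), size)
}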
Nov 4 12:34:45.487091 containerd[1556]: time="2025-11-04T12:34:45.487010550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:45.487653 containerd[1556]: time="2025-11-04T12:34:45.487600862Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Nov 4 12:34:45.488569 containerd[1556]: time="2025-11-04T12:34:45.488526596Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:45.491216 containerd[1556]: time="2025-11-04T12:34:45.491186401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:34:45.493186 containerd[1556]: time="2025-11-04T12:34:45.493149984Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.033623992s" Nov 4 12:34:45.493186 containerd[1556]: time="2025-11-04T12:34:45.493182125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 4 12:34:49.517566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:49.518065 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.2M memory peak. Nov 4 12:34:49.519963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:49.541649 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... Nov 4 12:34:49.541669 systemd[1]: Reloading... Nov 4 12:34:49.617561 zram_generator::config[2283]: No configuration found. Nov 4 12:34:49.822518 systemd[1]: Reloading finished in 280 ms. Nov 4 12:34:49.858945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:49.861523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:49.862447 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 12:34:49.862658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:49.862689 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.1M memory peak. Nov 4 12:34:49.864075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:49.981441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:49.985007 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 12:34:50.015595 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:34:50.015595 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 4 12:34:50.015595 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:34:50.015883 kubelet[2332]: I1104 12:34:50.015638 2332 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 12:34:51.549173 kubelet[2332]: I1104 12:34:51.548361 2332 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 12:34:51.549173 kubelet[2332]: I1104 12:34:51.548396 2332 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 12:34:51.549173 kubelet[2332]: I1104 12:34:51.548782 2332 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 12:34:51.573637 kubelet[2332]: E1104 12:34:51.573592 2332 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 12:34:51.574496 kubelet[2332]: I1104 12:34:51.574175 2332 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 12:34:51.582560 kubelet[2332]: I1104 12:34:51.582213 2332 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 12:34:51.584834 kubelet[2332]: I1104 12:34:51.584817 2332 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 12:34:51.585113 kubelet[2332]: I1104 12:34:51.585092 2332 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 12:34:51.585289 kubelet[2332]: I1104 12:34:51.585115 2332 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 12:34:51.585371 kubelet[2332]: I1104 12:34:51.585354 2332 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 12:34:51.585371 kubelet[2332]: I1104 12:34:51.585364 2332 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 12:34:51.585575 kubelet[2332]: I1104 12:34:51.585537 2332 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:34:51.588034 kubelet[2332]: I1104 12:34:51.587989 2332 kubelet.go:480] "Attempting to sync node with API server" Nov 4 12:34:51.588034 kubelet[2332]: I1104 12:34:51.588015 2332 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 12:34:51.588468 kubelet[2332]: I1104 12:34:51.588409 2332 kubelet.go:386] "Adding apiserver pod source" Nov 4 12:34:51.589605 kubelet[2332]: I1104 12:34:51.589588 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 12:34:51.590565 kubelet[2332]: I1104 12:34:51.590519 2332 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 12:34:51.592399 kubelet[2332]: E1104 12:34:51.592357 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 12:34:51.592621 kubelet[2332]: E1104 12:34:51.592591 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 12:34:51.592774 kubelet[2332]: I1104 12:34:51.592752 2332 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Nov 4 12:34:51.592893 kubelet[2332]: W1104 12:34:51.592878 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 12:34:51.595958 kubelet[2332]: I1104 12:34:51.595935 2332 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 12:34:51.596050 kubelet[2332]: I1104 12:34:51.595977 2332 server.go:1289] "Started kubelet" Nov 4 12:34:51.596212 kubelet[2332]: I1104 12:34:51.596166 2332 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 12:34:51.598520 kubelet[2332]: I1104 12:34:51.598471 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 12:34:51.598844 kubelet[2332]: I1104 12:34:51.598828 2332 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 12:34:51.601494 kubelet[2332]: I1104 12:34:51.600532 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 12:34:51.601718 kubelet[2332]: I1104 12:34:51.601690 2332 server.go:317] "Adding debug handlers to kubelet server" Nov 4 12:34:51.601871 kubelet[2332]: I1104 12:34:51.601853 2332 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 12:34:51.603350 kubelet[2332]: E1104 12:34:51.603317 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 12:34:51.603465 kubelet[2332]: I1104 12:34:51.603454 2332 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 12:34:51.603609 kubelet[2332]: I1104 12:34:51.603595 2332 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 12:34:51.603710 kubelet[2332]: I1104 12:34:51.603699 2332 reconciler.go:26] "Reconciler: start to sync state" Nov 4 12:34:51.604010 kubelet[2332]: E1104 12:34:51.603968 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 12:34:51.604087 kubelet[2332]: E1104 12:34:51.601977 2332 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874cdda4790f898 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 12:34:51.595954328 +0000 UTC m=+1.607792672,LastTimestamp:2025-11-04 12:34:51.595954328 +0000 UTC m=+1.607792672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 12:34:51.604828 kubelet[2332]: E1104 12:34:51.604796 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms" Nov 4 
12:34:51.604928 kubelet[2332]: I1104 12:34:51.604908 2332 factory.go:223] Registration of the systemd container factory successfully Nov 4 12:34:51.605041 kubelet[2332]: I1104 12:34:51.605022 2332 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 12:34:51.605240 kubelet[2332]: E1104 12:34:51.605033 2332 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 12:34:51.606028 kubelet[2332]: I1104 12:34:51.606008 2332 factory.go:223] Registration of the containerd container factory successfully Nov 4 12:34:51.616929 kubelet[2332]: I1104 12:34:51.616875 2332 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 12:34:51.617810 kubelet[2332]: I1104 12:34:51.617782 2332 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 12:34:51.617810 kubelet[2332]: I1104 12:34:51.617806 2332 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 12:34:51.617873 kubelet[2332]: I1104 12:34:51.617826 2332 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 12:34:51.617873 kubelet[2332]: I1104 12:34:51.617832 2332 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 12:34:51.617917 kubelet[2332]: E1104 12:34:51.617871 2332 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 12:34:51.622295 kubelet[2332]: I1104 12:34:51.622259 2332 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 12:34:51.622295 kubelet[2332]: I1104 12:34:51.622279 2332 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 12:34:51.622295 kubelet[2332]: I1104 12:34:51.622297 2332 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:34:51.622491 kubelet[2332]: E1104 12:34:51.622452 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 12:34:51.704046 kubelet[2332]: E1104 12:34:51.704005 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 12:34:51.718304 kubelet[2332]: E1104 12:34:51.718258 2332 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 4 12:34:51.744630 kubelet[2332]: I1104 12:34:51.744594 2332 policy_none.go:49] "None policy: Start" Nov 4 12:34:51.744745 kubelet[2332]: I1104 12:34:51.744642 2332 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 12:34:51.744745 kubelet[2332]: I1104 12:34:51.744661 2332 state_mem.go:35] "Initializing new in-memory state store" Nov 4 12:34:51.749820 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 12:34:51.775430 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 12:34:51.778766 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
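With the systemd cgroup driver and cgroup v2 (CgroupVersion 2 in the node config dump above), the kubepods slices systemd just created are plain directories under the unified hierarchy. A small sketch that lists them, assuming the standard /sys/fs/cgroup mount point:

package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    // kubepods.slice sits directly under the cgroup root because the kubelet's
    // cgroup root is "/" (see the CgroupRoot field in the node config dump above).
    entries, err := os.ReadDir("/sys/fs/cgroup/kubepods.slice")
    if err != nil {
        log.Fatal(err)
    }
    for _, e := range entries {
        if e.IsDir() {
            // Expect kubepods-burstable.slice and kubepods-besteffort.slice,
            // matching the slices systemd reported creating.
            fmt.Println(e.Name())
        }
    }
}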
Nov 4 12:34:51.793406 kubelet[2332]: E1104 12:34:51.793356 2332 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 12:34:51.793605 kubelet[2332]: I1104 12:34:51.793582 2332 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 12:34:51.793647 kubelet[2332]: I1104 12:34:51.793601 2332 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 12:34:51.794378 kubelet[2332]: I1104 12:34:51.794258 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 12:34:51.797317 kubelet[2332]: E1104 12:34:51.797116 2332 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 12:34:51.797317 kubelet[2332]: E1104 12:34:51.797155 2332 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 12:34:51.807229 kubelet[2332]: E1104 12:34:51.806013 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" Nov 4 12:34:51.895846 kubelet[2332]: I1104 12:34:51.895349 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:34:51.895846 kubelet[2332]: E1104 12:34:51.895765 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Nov 4 12:34:51.937763 systemd[1]: Created slice kubepods-burstable-pod2f7b1770687a43eefe83c2cd5fac237b.slice - libcontainer container kubepods-burstable-pod2f7b1770687a43eefe83c2cd5fac237b.slice. Nov 4 12:34:51.947270 kubelet[2332]: E1104 12:34:51.947238 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:51.951181 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 4 12:34:51.962067 kubelet[2332]: E1104 12:34:51.962040 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:51.964816 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
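Every "connection refused" against https://10.0.0.141:6443 above is expected at this point: the kubelet comes up first and is about to start the API server itself as a static pod, so its watches and the node registration cannot succeed yet. A throwaway probe like the sketch below (certificate verification skipped, which is acceptable for a liveness poke and nothing else) just waits for the endpoint to start answering:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Address taken from the log; /readyz is the API server's readiness endpoint.
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    for {
        resp, err := client.Get("https://10.0.0.141:6443/readyz")
        if err == nil {
            fmt.Println("kube-apiserver answered:", resp.Status)
            resp.Body.Close()
            return
        }
        time.Sleep(time.Second)
    }
}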
Nov 4 12:34:51.966466 kubelet[2332]: E1104 12:34:51.966438 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:52.005948 kubelet[2332]: I1104 12:34:52.005910 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:52.005948 kubelet[2332]: I1104 12:34:52.005945 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 12:34:52.006037 kubelet[2332]: I1104 12:34:52.005966 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:52.006037 kubelet[2332]: I1104 12:34:52.005982 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:52.006037 kubelet[2332]: I1104 12:34:52.006001 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f7b1770687a43eefe83c2cd5fac237b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f7b1770687a43eefe83c2cd5fac237b\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:52.006037 kubelet[2332]: I1104 12:34:52.006016 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f7b1770687a43eefe83c2cd5fac237b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f7b1770687a43eefe83c2cd5fac237b\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:52.006037 kubelet[2332]: I1104 12:34:52.006031 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f7b1770687a43eefe83c2cd5fac237b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f7b1770687a43eefe83c2cd5fac237b\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:52.006135 kubelet[2332]: I1104 12:34:52.006046 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:52.006135 kubelet[2332]: I1104 12:34:52.006061 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:52.097809 kubelet[2332]: I1104 12:34:52.097704 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:34:52.098180 kubelet[2332]: E1104 12:34:52.098075 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Nov 4 12:34:52.207143 kubelet[2332]: E1104 12:34:52.207093 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" Nov 4 12:34:52.248361 kubelet[2332]: E1104 12:34:52.248320 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.249256 containerd[1556]: time="2025-11-04T12:34:52.248951124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f7b1770687a43eefe83c2cd5fac237b,Namespace:kube-system,Attempt:0,}" Nov 4 12:34:52.264045 kubelet[2332]: E1104 12:34:52.263826 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.264625 containerd[1556]: time="2025-11-04T12:34:52.264596602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 4 12:34:52.267214 containerd[1556]: time="2025-11-04T12:34:52.267183076Z" level=info msg="connecting to shim 333a2b919021f06aeb7e9246aed7fadbd42912a091a871aa6282afd722bf2e01" address="unix:///run/containerd/s/7471166945a670c307eb127d6d74a0bd16692404ce23331033a23b8b0bfc822f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:34:52.267917 kubelet[2332]: E1104 12:34:52.267826 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.268421 containerd[1556]: time="2025-11-04T12:34:52.268387950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 4 12:34:52.292713 systemd[1]: Started cri-containerd-333a2b919021f06aeb7e9246aed7fadbd42912a091a871aa6282afd722bf2e01.scope - libcontainer container 333a2b919021f06aeb7e9246aed7fadbd42912a091a871aa6282afd722bf2e01. 
Nov 4 12:34:52.293147 containerd[1556]: time="2025-11-04T12:34:52.293112755Z" level=info msg="connecting to shim 8b9331174337e198898e7cf7efb105b99fa0ee9fd9c6b700c2f68ea17a2ebc43" address="unix:///run/containerd/s/394c03bb30105806745bb33f3a71f15f0a44ae3480ca25facb8d6c3c5fbb343d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:34:52.302388 containerd[1556]: time="2025-11-04T12:34:52.302346602Z" level=info msg="connecting to shim 85eb26df4020648de3b75d229fda0a5612c7297f91ce436b7683bd2bdb4f62ff" address="unix:///run/containerd/s/9d9b93717353b6158a4eb03c29f6757c3e908f37276b09315637a3cf901243de" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:34:52.317724 systemd[1]: Started cri-containerd-8b9331174337e198898e7cf7efb105b99fa0ee9fd9c6b700c2f68ea17a2ebc43.scope - libcontainer container 8b9331174337e198898e7cf7efb105b99fa0ee9fd9c6b700c2f68ea17a2ebc43. Nov 4 12:34:52.322186 systemd[1]: Started cri-containerd-85eb26df4020648de3b75d229fda0a5612c7297f91ce436b7683bd2bdb4f62ff.scope - libcontainer container 85eb26df4020648de3b75d229fda0a5612c7297f91ce436b7683bd2bdb4f62ff. Nov 4 12:34:52.348939 containerd[1556]: time="2025-11-04T12:34:52.348843402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f7b1770687a43eefe83c2cd5fac237b,Namespace:kube-system,Attempt:0,} returns sandbox id \"333a2b919021f06aeb7e9246aed7fadbd42912a091a871aa6282afd722bf2e01\"" Nov 4 12:34:52.350367 kubelet[2332]: E1104 12:34:52.350340 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.355243 containerd[1556]: time="2025-11-04T12:34:52.355205820Z" level=info msg="CreateContainer within sandbox \"333a2b919021f06aeb7e9246aed7fadbd42912a091a871aa6282afd722bf2e01\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 12:34:52.358557 containerd[1556]: time="2025-11-04T12:34:52.358369085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b9331174337e198898e7cf7efb105b99fa0ee9fd9c6b700c2f68ea17a2ebc43\"" Nov 4 12:34:52.359506 kubelet[2332]: E1104 12:34:52.359478 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.364962 containerd[1556]: time="2025-11-04T12:34:52.364929475Z" level=info msg="CreateContainer within sandbox \"8b9331174337e198898e7cf7efb105b99fa0ee9fd9c6b700c2f68ea17a2ebc43\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 12:34:52.365899 containerd[1556]: time="2025-11-04T12:34:52.365867880Z" level=info msg="Container 1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:34:52.373457 containerd[1556]: time="2025-11-04T12:34:52.373410846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"85eb26df4020648de3b75d229fda0a5612c7297f91ce436b7683bd2bdb4f62ff\"" Nov 4 12:34:52.374042 kubelet[2332]: E1104 12:34:52.374024 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.376115 containerd[1556]: 
time="2025-11-04T12:34:52.376090985Z" level=info msg="Container 61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:34:52.378169 containerd[1556]: time="2025-11-04T12:34:52.378120434Z" level=info msg="CreateContainer within sandbox \"333a2b919021f06aeb7e9246aed7fadbd42912a091a871aa6282afd722bf2e01\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c\"" Nov 4 12:34:52.378792 containerd[1556]: time="2025-11-04T12:34:52.378732553Z" level=info msg="CreateContainer within sandbox \"85eb26df4020648de3b75d229fda0a5612c7297f91ce436b7683bd2bdb4f62ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 12:34:52.378885 containerd[1556]: time="2025-11-04T12:34:52.378753759Z" level=info msg="StartContainer for \"1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c\"" Nov 4 12:34:52.379890 containerd[1556]: time="2025-11-04T12:34:52.379848364Z" level=info msg="connecting to shim 1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c" address="unix:///run/containerd/s/7471166945a670c307eb127d6d74a0bd16692404ce23331033a23b8b0bfc822f" protocol=ttrpc version=3 Nov 4 12:34:52.381812 containerd[1556]: time="2025-11-04T12:34:52.381561250Z" level=info msg="CreateContainer within sandbox \"8b9331174337e198898e7cf7efb105b99fa0ee9fd9c6b700c2f68ea17a2ebc43\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78\"" Nov 4 12:34:52.381927 containerd[1556]: time="2025-11-04T12:34:52.381896738Z" level=info msg="StartContainer for \"61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78\"" Nov 4 12:34:52.383012 containerd[1556]: time="2025-11-04T12:34:52.382980700Z" level=info msg="connecting to shim 61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78" address="unix:///run/containerd/s/394c03bb30105806745bb33f3a71f15f0a44ae3480ca25facb8d6c3c5fbb343d" protocol=ttrpc version=3 Nov 4 12:34:52.384719 containerd[1556]: time="2025-11-04T12:34:52.384359420Z" level=info msg="Container 62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:34:52.394564 containerd[1556]: time="2025-11-04T12:34:52.394498383Z" level=info msg="CreateContainer within sandbox \"85eb26df4020648de3b75d229fda0a5612c7297f91ce436b7683bd2bdb4f62ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6\"" Nov 4 12:34:52.395061 containerd[1556]: time="2025-11-04T12:34:52.395033162Z" level=info msg="StartContainer for \"62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6\"" Nov 4 12:34:52.396336 containerd[1556]: time="2025-11-04T12:34:52.396296251Z" level=info msg="connecting to shim 62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6" address="unix:///run/containerd/s/9d9b93717353b6158a4eb03c29f6757c3e908f37276b09315637a3cf901243de" protocol=ttrpc version=3 Nov 4 12:34:52.401682 systemd[1]: Started cri-containerd-1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c.scope - libcontainer container 1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c. 
Nov 4 12:34:52.403146 systemd[1]: Started cri-containerd-61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78.scope - libcontainer container 61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78. Nov 4 12:34:52.426287 systemd[1]: Started cri-containerd-62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6.scope - libcontainer container 62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6. Nov 4 12:34:52.448313 containerd[1556]: time="2025-11-04T12:34:52.448176495Z" level=info msg="StartContainer for \"61a298f58a9bf1598e06fca2cfb9ca7f5c1d0c52c2dcb853479872ff32e56c78\" returns successfully" Nov 4 12:34:52.450773 containerd[1556]: time="2025-11-04T12:34:52.450689630Z" level=info msg="StartContainer for \"1798833a89dced0e800ca46ab7b6c37fc9795ac990ae284cc70c215e5dc2354c\" returns successfully" Nov 4 12:34:52.482177 containerd[1556]: time="2025-11-04T12:34:52.482132266Z" level=info msg="StartContainer for \"62e832c82b036647a8cc014368ab0e26fce8d8a81a14905714c5de7162ba2af6\" returns successfully" Nov 4 12:34:52.500686 kubelet[2332]: I1104 12:34:52.500646 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:34:52.501027 kubelet[2332]: E1104 12:34:52.500985 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Nov 4 12:34:52.510007 kubelet[2332]: E1104 12:34:52.509909 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 12:34:52.626848 kubelet[2332]: E1104 12:34:52.626757 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:52.627126 kubelet[2332]: E1104 12:34:52.626885 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.629785 kubelet[2332]: E1104 12:34:52.629398 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:52.629785 kubelet[2332]: E1104 12:34:52.629510 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:52.632738 kubelet[2332]: E1104 12:34:52.632719 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:52.632835 kubelet[2332]: E1104 12:34:52.632819 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:53.303123 kubelet[2332]: I1104 12:34:53.303086 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:34:53.633006 kubelet[2332]: E1104 12:34:53.632910 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 
12:34:53.633006 kubelet[2332]: E1104 12:34:53.632973 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:34:53.633294 kubelet[2332]: E1104 12:34:53.633056 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:53.633294 kubelet[2332]: E1104 12:34:53.633068 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:54.717891 kubelet[2332]: E1104 12:34:54.717856 2332 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 12:34:54.831770 kubelet[2332]: I1104 12:34:54.831585 2332 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 12:34:54.831770 kubelet[2332]: E1104 12:34:54.831625 2332 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 12:34:54.904636 kubelet[2332]: I1104 12:34:54.904600 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:54.909605 kubelet[2332]: E1104 12:34:54.909574 2332 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:54.909605 kubelet[2332]: I1104 12:34:54.909603 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:54.911257 kubelet[2332]: E1104 12:34:54.911232 2332 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:54.911257 kubelet[2332]: I1104 12:34:54.911256 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 12:34:54.912616 kubelet[2332]: E1104 12:34:54.912592 2332 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 12:34:55.177714 kubelet[2332]: I1104 12:34:55.177675 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:55.179985 kubelet[2332]: E1104 12:34:55.179957 2332 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:55.180138 kubelet[2332]: E1104 12:34:55.180123 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:55.594366 kubelet[2332]: I1104 12:34:55.594260 2332 apiserver.go:52] "Watching apiserver" Nov 4 12:34:55.604191 kubelet[2332]: I1104 12:34:55.604138 2332 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 12:34:56.926124 systemd[1]: Reload requested from 
client PID 2612 ('systemctl') (unit session-7.scope)... Nov 4 12:34:56.926151 systemd[1]: Reloading... Nov 4 12:34:56.995662 zram_generator::config[2656]: No configuration found. Nov 4 12:34:57.168752 systemd[1]: Reloading finished in 242 ms. Nov 4 12:34:57.200812 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:57.212691 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 12:34:57.213617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:57.213677 systemd[1]: kubelet.service: Consumed 1.970s CPU time, 126.7M memory peak. Nov 4 12:34:57.215491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:34:57.364174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:34:57.368218 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 12:34:57.411575 kubelet[2698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:34:57.411575 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 12:34:57.411575 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:34:57.411575 kubelet[2698]: I1104 12:34:57.410981 2698 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 12:34:57.416819 kubelet[2698]: I1104 12:34:57.416787 2698 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 12:34:57.416957 kubelet[2698]: I1104 12:34:57.416946 2698 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 12:34:57.417220 kubelet[2698]: I1104 12:34:57.417192 2698 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 12:34:57.418599 kubelet[2698]: I1104 12:34:57.418532 2698 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 12:34:57.421087 kubelet[2698]: I1104 12:34:57.421049 2698 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 12:34:57.428209 kubelet[2698]: I1104 12:34:57.428175 2698 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 12:34:57.430754 kubelet[2698]: I1104 12:34:57.430730 2698 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 12:34:57.430972 kubelet[2698]: I1104 12:34:57.430944 2698 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 12:34:57.431117 kubelet[2698]: I1104 12:34:57.430971 2698 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 12:34:57.431194 kubelet[2698]: I1104 12:34:57.431128 2698 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 12:34:57.431194 kubelet[2698]: I1104 12:34:57.431140 2698 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 12:34:57.431194 kubelet[2698]: I1104 12:34:57.431184 2698 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:34:57.431343 kubelet[2698]: I1104 12:34:57.431330 2698 kubelet.go:480] "Attempting to sync node with API server" Nov 4 12:34:57.431366 kubelet[2698]: I1104 12:34:57.431346 2698 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 12:34:57.431387 kubelet[2698]: I1104 12:34:57.431375 2698 kubelet.go:386] "Adding apiserver pod source" Nov 4 12:34:57.431416 kubelet[2698]: I1104 12:34:57.431388 2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 12:34:57.432422 kubelet[2698]: I1104 12:34:57.432301 2698 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 12:34:57.432925 kubelet[2698]: I1104 12:34:57.432895 2698 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 12:34:57.434943 kubelet[2698]: I1104 12:34:57.434917 2698 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 12:34:57.434994 kubelet[2698]: I1104 12:34:57.434969 2698 server.go:1289] "Started kubelet" Nov 4 12:34:57.438557 kubelet[2698]: I1104 12:34:57.437684 2698 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 12:34:57.441562 kubelet[2698]: I1104 12:34:57.439963 2698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 12:34:57.441562 kubelet[2698]: I1104 12:34:57.439962 2698 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 12:34:57.441562 kubelet[2698]: I1104 12:34:57.439974 2698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 12:34:57.441562 kubelet[2698]: I1104 12:34:57.440177 2698 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 12:34:57.441562 kubelet[2698]: I1104 12:34:57.440810 2698 server.go:317] "Adding debug handlers to kubelet server" Nov 4 12:34:57.441980 kubelet[2698]: I1104 12:34:57.441959 2698 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 12:34:57.442196 kubelet[2698]: I1104 12:34:57.442180 2698 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 12:34:57.442389 kubelet[2698]: I1104 12:34:57.442374 2698 reconciler.go:26] "Reconciler: start to sync state" Nov 4 12:34:57.449961 kubelet[2698]: I1104 12:34:57.449835 2698 factory.go:223] Registration of the systemd container factory successfully Nov 4 12:34:57.450474 kubelet[2698]: I1104 12:34:57.450142 2698 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 12:34:57.450693 kubelet[2698]: E1104 12:34:57.450535 2698 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 12:34:57.455385 kubelet[2698]: I1104 12:34:57.455293 2698 factory.go:223] Registration of the containerd container factory successfully Nov 4 12:34:57.460710 kubelet[2698]: E1104 12:34:57.460665 2698 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 12:34:57.466149 kubelet[2698]: I1104 12:34:57.466109 2698 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 12:34:57.467918 kubelet[2698]: I1104 12:34:57.467891 2698 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 12:34:57.467918 kubelet[2698]: I1104 12:34:57.467919 2698 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 12:34:57.468019 kubelet[2698]: I1104 12:34:57.467945 2698 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 12:34:57.468019 kubelet[2698]: I1104 12:34:57.467952 2698 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 12:34:57.468019 kubelet[2698]: E1104 12:34:57.467993 2698 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 12:34:57.497702 kubelet[2698]: I1104 12:34:57.497661 2698 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 12:34:57.497885 kubelet[2698]: I1104 12:34:57.497680 2698 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 12:34:57.497885 kubelet[2698]: I1104 12:34:57.497847 2698 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:34:57.498589 kubelet[2698]: I1104 12:34:57.498566 2698 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 12:34:57.498647 kubelet[2698]: I1104 12:34:57.498588 2698 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 12:34:57.498647 kubelet[2698]: I1104 12:34:57.498616 2698 policy_none.go:49] "None policy: Start" Nov 4 12:34:57.498647 kubelet[2698]: I1104 12:34:57.498626 2698 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 12:34:57.498647 kubelet[2698]: I1104 12:34:57.498637 2698 state_mem.go:35] "Initializing new in-memory state store" Nov 4 12:34:57.498733 kubelet[2698]: I1104 12:34:57.498727 2698 state_mem.go:75] "Updated machine memory state" Nov 4 12:34:57.502424 kubelet[2698]: E1104 12:34:57.502398 2698 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 12:34:57.502617 kubelet[2698]: I1104 12:34:57.502600 2698 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 12:34:57.502672 kubelet[2698]: I1104 12:34:57.502615 2698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 12:34:57.503295 kubelet[2698]: I1104 12:34:57.502852 2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 12:34:57.503946 kubelet[2698]: E1104 12:34:57.503910 2698 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 12:34:57.569144 kubelet[2698]: I1104 12:34:57.569098 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 12:34:57.569574 kubelet[2698]: I1104 12:34:57.569536 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:57.569903 kubelet[2698]: I1104 12:34:57.569883 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:57.606167 kubelet[2698]: I1104 12:34:57.606119 2698 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:34:57.613489 kubelet[2698]: I1104 12:34:57.613168 2698 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 12:34:57.613489 kubelet[2698]: I1104 12:34:57.613264 2698 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 12:34:57.643244 kubelet[2698]: I1104 12:34:57.643199 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:57.643244 kubelet[2698]: I1104 12:34:57.643243 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f7b1770687a43eefe83c2cd5fac237b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f7b1770687a43eefe83c2cd5fac237b\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:57.643369 kubelet[2698]: I1104 12:34:57.643281 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:57.643369 kubelet[2698]: I1104 12:34:57.643302 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:57.643369 kubelet[2698]: I1104 12:34:57.643319 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 12:34:57.643369 kubelet[2698]: I1104 12:34:57.643333 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f7b1770687a43eefe83c2cd5fac237b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f7b1770687a43eefe83c2cd5fac237b\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:57.643369 kubelet[2698]: I1104 12:34:57.643348 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2f7b1770687a43eefe83c2cd5fac237b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f7b1770687a43eefe83c2cd5fac237b\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:34:57.643500 kubelet[2698]: I1104 12:34:57.643361 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:57.643500 kubelet[2698]: I1104 12:34:57.643375 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:57.875113 kubelet[2698]: E1104 12:34:57.874939 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:57.875113 kubelet[2698]: E1104 12:34:57.875061 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:57.875228 kubelet[2698]: E1104 12:34:57.875130 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:57.932349 sudo[2741]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 12:34:57.933013 sudo[2741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 12:34:58.259791 sudo[2741]: pam_unix(sudo:session): session closed for user root Nov 4 12:34:58.431931 kubelet[2698]: I1104 12:34:58.431869 2698 apiserver.go:52] "Watching apiserver" Nov 4 12:34:58.442363 kubelet[2698]: I1104 12:34:58.442334 2698 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 12:34:58.489081 kubelet[2698]: E1104 12:34:58.489047 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:58.489081 kubelet[2698]: I1104 12:34:58.489077 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:58.489643 kubelet[2698]: I1104 12:34:58.489614 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 12:34:58.495936 kubelet[2698]: E1104 12:34:58.495894 2698 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:34:58.497532 kubelet[2698]: E1104 12:34:58.496089 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:58.497532 kubelet[2698]: E1104 12:34:58.497482 2698 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 
12:34:58.497680 kubelet[2698]: E1104 12:34:58.497641 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:58.514802 kubelet[2698]: I1104 12:34:58.514648 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.514633425 podStartE2EDuration="1.514633425s" podCreationTimestamp="2025-11-04 12:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:34:58.5115089 +0000 UTC m=+1.140034341" watchObservedRunningTime="2025-11-04 12:34:58.514633425 +0000 UTC m=+1.143158866" Nov 4 12:34:58.528040 kubelet[2698]: I1104 12:34:58.527982 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.527963625 podStartE2EDuration="1.527963625s" podCreationTimestamp="2025-11-04 12:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:34:58.520495791 +0000 UTC m=+1.149021232" watchObservedRunningTime="2025-11-04 12:34:58.527963625 +0000 UTC m=+1.156489066" Nov 4 12:34:58.536256 kubelet[2698]: I1104 12:34:58.536205 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.536189307 podStartE2EDuration="1.536189307s" podCreationTimestamp="2025-11-04 12:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:34:58.528278782 +0000 UTC m=+1.156804223" watchObservedRunningTime="2025-11-04 12:34:58.536189307 +0000 UTC m=+1.164714748" Nov 4 12:34:59.490308 kubelet[2698]: E1104 12:34:59.490107 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:59.490308 kubelet[2698]: E1104 12:34:59.490231 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:34:59.490928 kubelet[2698]: E1104 12:34:59.490895 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:00.171566 sudo[1766]: pam_unix(sudo:session): session closed for user root Nov 4 12:35:00.173380 sshd[1765]: Connection closed by 10.0.0.1 port 43240 Nov 4 12:35:00.173916 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:00.177747 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:43240.service: Deactivated successfully. Nov 4 12:35:00.180096 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 12:35:00.180398 systemd[1]: session-7.scope: Consumed 6.344s CPU time, 256.3M memory peak. Nov 4 12:35:00.181414 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Nov 4 12:35:00.182517 systemd-logind[1532]: Removed session 7. 
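The pod_startup_latency_tracker lines above carry enough to check the reported figure by hand: for kube-scheduler-localhost, watchObservedRunningTime minus podCreationTimestamp is exactly the logged podStartSLOduration. A tiny stdlib-only sketch of that subtraction, with the timestamps copied from the entry:

```go
// Recomputing the 1.514633425s startup duration from the logged timestamps.
package main

import (
	"fmt"
	"log"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		log.Fatal(err)
	}
	return t
}

func main() {
	// Default Go time.Time formatting, as the values appear in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created := mustParse(layout, "2025-11-04 12:34:57 +0000 UTC")            // podCreationTimestamp
	observed := mustParse(layout, "2025-11-04 12:34:58.514633425 +0000 UTC") // watchObservedRunningTime

	fmt.Println(observed.Sub(created)) // prints 1.514633425s, the podStartSLOduration above
}
```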
Nov 4 12:35:00.513267 kubelet[2698]: E1104 12:35:00.512944 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:01.057428 kubelet[2698]: E1104 12:35:01.057396 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:03.784384 kubelet[2698]: I1104 12:35:03.784349 2698 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 12:35:03.787286 containerd[1556]: time="2025-11-04T12:35:03.787233205Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 12:35:03.787729 kubelet[2698]: I1104 12:35:03.787712 2698 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 12:35:04.305520 systemd[1]: Created slice kubepods-besteffort-pod36d79aa6_0a25_4547_89cb_aa1615ec551d.slice - libcontainer container kubepods-besteffort-pod36d79aa6_0a25_4547_89cb_aa1615ec551d.slice. Nov 4 12:35:04.321670 systemd[1]: Created slice kubepods-burstable-pod14afe496_c962_41b4_ad4e_b8fcffe6e1d4.slice - libcontainer container kubepods-burstable-pod14afe496_c962_41b4_ad4e_b8fcffe6e1d4.slice. Nov 4 12:35:04.404832 kubelet[2698]: I1104 12:35:04.404779 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc4k\" (UniqueName: \"kubernetes.io/projected/36d79aa6-0a25-4547-89cb-aa1615ec551d-kube-api-access-nbc4k\") pod \"kube-proxy-tvs72\" (UID: \"36d79aa6-0a25-4547-89cb-aa1615ec551d\") " pod="kube-system/kube-proxy-tvs72" Nov 4 12:35:04.405091 kubelet[2698]: I1104 12:35:04.405062 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hostproc\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.405162 kubelet[2698]: I1104 12:35:04.405148 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-xtables-lock\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.405806 kubelet[2698]: I1104 12:35:04.405774 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-net\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.405859 kubelet[2698]: I1104 12:35:04.405823 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-kernel\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.405859 kubelet[2698]: I1104 12:35:04.405845 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vsk\" (UniqueName: 
\"kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-kube-api-access-s9vsk\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406032 kubelet[2698]: I1104 12:35:04.405870 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36d79aa6-0a25-4547-89cb-aa1615ec551d-lib-modules\") pod \"kube-proxy-tvs72\" (UID: \"36d79aa6-0a25-4547-89cb-aa1615ec551d\") " pod="kube-system/kube-proxy-tvs72" Nov 4 12:35:04.406032 kubelet[2698]: I1104 12:35:04.405888 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-cgroup\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406032 kubelet[2698]: I1104 12:35:04.405905 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cni-path\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406032 kubelet[2698]: I1104 12:35:04.405941 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-lib-modules\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406032 kubelet[2698]: I1104 12:35:04.405961 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-clustermesh-secrets\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406032 kubelet[2698]: I1104 12:35:04.406019 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36d79aa6-0a25-4547-89cb-aa1615ec551d-xtables-lock\") pod \"kube-proxy-tvs72\" (UID: \"36d79aa6-0a25-4547-89cb-aa1615ec551d\") " pod="kube-system/kube-proxy-tvs72" Nov 4 12:35:04.406878 kubelet[2698]: I1104 12:35:04.406052 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-run\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406878 kubelet[2698]: I1104 12:35:04.406116 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36d79aa6-0a25-4547-89cb-aa1615ec551d-kube-proxy\") pod \"kube-proxy-tvs72\" (UID: \"36d79aa6-0a25-4547-89cb-aa1615ec551d\") " pod="kube-system/kube-proxy-tvs72" Nov 4 12:35:04.406878 kubelet[2698]: I1104 12:35:04.406435 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-bpf-maps\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406878 
kubelet[2698]: I1104 12:35:04.406479 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-etc-cni-netd\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406878 kubelet[2698]: I1104 12:35:04.406525 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-config-path\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.406878 kubelet[2698]: I1104 12:35:04.406802 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hubble-tls\") pod \"cilium-565jf\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " pod="kube-system/cilium-565jf" Nov 4 12:35:04.616090 kubelet[2698]: E1104 12:35:04.615703 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:04.617804 containerd[1556]: time="2025-11-04T12:35:04.617768053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvs72,Uid:36d79aa6-0a25-4547-89cb-aa1615ec551d,Namespace:kube-system,Attempt:0,}" Nov 4 12:35:04.627285 kubelet[2698]: E1104 12:35:04.627248 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:04.627994 containerd[1556]: time="2025-11-04T12:35:04.627705928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-565jf,Uid:14afe496-c962-41b4-ad4e-b8fcffe6e1d4,Namespace:kube-system,Attempt:0,}" Nov 4 12:35:04.637073 containerd[1556]: time="2025-11-04T12:35:04.637023810Z" level=info msg="connecting to shim d5bc74db134250daa577e9513e673bcc4ba11b894c9f83db26e81dcd3e407025" address="unix:///run/containerd/s/a02739b30f6f8b415ac16b7be755216fda14d8ab641ebc6f80fc2d217602b724" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:35:04.646089 containerd[1556]: time="2025-11-04T12:35:04.646034476Z" level=info msg="connecting to shim 870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3" address="unix:///run/containerd/s/0f52a848e1f1cdf8775c1f3681198affc20e8d3eaa02a0a018b67d6b3498ddbf" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:35:04.656709 systemd[1]: Started cri-containerd-d5bc74db134250daa577e9513e673bcc4ba11b894c9f83db26e81dcd3e407025.scope - libcontainer container d5bc74db134250daa577e9513e673bcc4ba11b894c9f83db26e81dcd3e407025. Nov 4 12:35:04.680757 systemd[1]: Started cri-containerd-870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3.scope - libcontainer container 870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3. 
Nov 4 12:35:04.701411 containerd[1556]: time="2025-11-04T12:35:04.701367779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvs72,Uid:36d79aa6-0a25-4547-89cb-aa1615ec551d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5bc74db134250daa577e9513e673bcc4ba11b894c9f83db26e81dcd3e407025\"" Nov 4 12:35:04.705059 kubelet[2698]: E1104 12:35:04.704592 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:04.709450 containerd[1556]: time="2025-11-04T12:35:04.709364793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-565jf,Uid:14afe496-c962-41b4-ad4e-b8fcffe6e1d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\"" Nov 4 12:35:04.711154 kubelet[2698]: E1104 12:35:04.711132 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:04.713150 containerd[1556]: time="2025-11-04T12:35:04.713121227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 12:35:04.721147 containerd[1556]: time="2025-11-04T12:35:04.721120561Z" level=info msg="CreateContainer within sandbox \"d5bc74db134250daa577e9513e673bcc4ba11b894c9f83db26e81dcd3e407025\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 12:35:04.729575 containerd[1556]: time="2025-11-04T12:35:04.729202699Z" level=info msg="Container 5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:04.736275 containerd[1556]: time="2025-11-04T12:35:04.736228622Z" level=info msg="CreateContainer within sandbox \"d5bc74db134250daa577e9513e673bcc4ba11b894c9f83db26e81dcd3e407025\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2\"" Nov 4 12:35:04.736970 containerd[1556]: time="2025-11-04T12:35:04.736942379Z" level=info msg="StartContainer for \"5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2\"" Nov 4 12:35:04.738417 containerd[1556]: time="2025-11-04T12:35:04.738383294Z" level=info msg="connecting to shim 5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2" address="unix:///run/containerd/s/a02739b30f6f8b415ac16b7be755216fda14d8ab641ebc6f80fc2d217602b724" protocol=ttrpc version=3 Nov 4 12:35:04.758705 systemd[1]: Started cri-containerd-5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2.scope - libcontainer container 5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2. Nov 4 12:35:04.788443 containerd[1556]: time="2025-11-04T12:35:04.788355839Z" level=info msg="StartContainer for \"5d5aab2f6b808d001448f09c5b33f7843451832b729a7a062e3d7d5f1af6fea2\" returns successfully" Nov 4 12:35:04.957716 systemd[1]: Created slice kubepods-besteffort-podeeb86756_7b27_45f4_b0b9_1af4e818db03.slice - libcontainer container kubepods-besteffort-podeeb86756_7b27_45f4_b0b9_1af4e818db03.slice. 
Nov 4 12:35:05.016652 kubelet[2698]: I1104 12:35:05.016581 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grt9d\" (UniqueName: \"kubernetes.io/projected/eeb86756-7b27-45f4-b0b9-1af4e818db03-kube-api-access-grt9d\") pod \"cilium-operator-6c4d7847fc-kwbgf\" (UID: \"eeb86756-7b27-45f4-b0b9-1af4e818db03\") " pod="kube-system/cilium-operator-6c4d7847fc-kwbgf" Nov 4 12:35:05.016652 kubelet[2698]: I1104 12:35:05.016639 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eeb86756-7b27-45f4-b0b9-1af4e818db03-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kwbgf\" (UID: \"eeb86756-7b27-45f4-b0b9-1af4e818db03\") " pod="kube-system/cilium-operator-6c4d7847fc-kwbgf" Nov 4 12:35:05.261623 kubelet[2698]: E1104 12:35:05.261456 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:05.262332 containerd[1556]: time="2025-11-04T12:35:05.262051223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kwbgf,Uid:eeb86756-7b27-45f4-b0b9-1af4e818db03,Namespace:kube-system,Attempt:0,}" Nov 4 12:35:05.280573 containerd[1556]: time="2025-11-04T12:35:05.280099227Z" level=info msg="connecting to shim 1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140" address="unix:///run/containerd/s/b85b24be9407261caffd6f46328ec5f43eb2bb0de8f8ac7893d0c0710ce3bbb9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:35:05.301699 systemd[1]: Started cri-containerd-1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140.scope - libcontainer container 1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140. 
Nov 4 12:35:05.331059 containerd[1556]: time="2025-11-04T12:35:05.331026560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kwbgf,Uid:eeb86756-7b27-45f4-b0b9-1af4e818db03,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\"" Nov 4 12:35:05.331609 kubelet[2698]: E1104 12:35:05.331587 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:05.505892 kubelet[2698]: E1104 12:35:05.505826 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:08.128577 kubelet[2698]: E1104 12:35:08.128116 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:08.150444 kubelet[2698]: I1104 12:35:08.149522 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvs72" podStartSLOduration=4.142799028 podStartE2EDuration="4.142799028s" podCreationTimestamp="2025-11-04 12:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:35:05.516751612 +0000 UTC m=+8.145277053" watchObservedRunningTime="2025-11-04 12:35:08.142799028 +0000 UTC m=+10.771324469" Nov 4 12:35:10.521617 kubelet[2698]: E1104 12:35:10.520757 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:11.065236 kubelet[2698]: E1104 12:35:11.065205 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:11.518225 kubelet[2698]: E1104 12:35:11.518104 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:11.519018 kubelet[2698]: E1104 12:35:11.518817 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:12.862415 update_engine[1536]: I20251104 12:35:12.862353 1536 update_attempter.cc:509] Updating boot flags... Nov 4 12:35:16.003319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434487162.mount: Deactivated successfully. 
Nov 4 12:35:17.288693 containerd[1556]: time="2025-11-04T12:35:17.288646008Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:35:17.289609 containerd[1556]: time="2025-11-04T12:35:17.289411028Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Nov 4 12:35:17.290310 containerd[1556]: time="2025-11-04T12:35:17.290271771Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:35:17.292238 containerd[1556]: time="2025-11-04T12:35:17.292196302Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.579041953s" Nov 4 12:35:17.292238 containerd[1556]: time="2025-11-04T12:35:17.292231862Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 4 12:35:17.293816 containerd[1556]: time="2025-11-04T12:35:17.293729102Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 12:35:17.302814 containerd[1556]: time="2025-11-04T12:35:17.302782502Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 12:35:17.311084 containerd[1556]: time="2025-11-04T12:35:17.311050240Z" level=info msg="Container 0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:17.316273 containerd[1556]: time="2025-11-04T12:35:17.316239937Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\"" Nov 4 12:35:17.316594 containerd[1556]: time="2025-11-04T12:35:17.316573066Z" level=info msg="StartContainer for \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\"" Nov 4 12:35:17.318554 containerd[1556]: time="2025-11-04T12:35:17.318507237Z" level=info msg="connecting to shim 0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07" address="unix:///run/containerd/s/0f52a848e1f1cdf8775c1f3681198affc20e8d3eaa02a0a018b67d6b3498ddbf" protocol=ttrpc version=3 Nov 4 12:35:17.356893 systemd[1]: Started cri-containerd-0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07.scope - libcontainer container 0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07. 
Nov 4 12:35:17.383911 containerd[1556]: time="2025-11-04T12:35:17.383874287Z" level=info msg="StartContainer for \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" returns successfully" Nov 4 12:35:17.395906 systemd[1]: cri-containerd-0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07.scope: Deactivated successfully. Nov 4 12:35:17.416193 containerd[1556]: time="2025-11-04T12:35:17.416154900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" id:\"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" pid:3145 exited_at:{seconds:1762259717 nanos:415696568}" Nov 4 12:35:17.420248 containerd[1556]: time="2025-11-04T12:35:17.420103245Z" level=info msg="received exit event container_id:\"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" id:\"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" pid:3145 exited_at:{seconds:1762259717 nanos:415696568}" Nov 4 12:35:17.451695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07-rootfs.mount: Deactivated successfully. Nov 4 12:35:17.534406 kubelet[2698]: E1104 12:35:17.534375 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:18.538842 kubelet[2698]: E1104 12:35:18.538813 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:18.544448 containerd[1556]: time="2025-11-04T12:35:18.544346969Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 12:35:18.554128 containerd[1556]: time="2025-11-04T12:35:18.553033668Z" level=info msg="Container c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:18.559266 containerd[1556]: time="2025-11-04T12:35:18.559234145Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\"" Nov 4 12:35:18.560564 containerd[1556]: time="2025-11-04T12:35:18.560518097Z" level=info msg="StartContainer for \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\"" Nov 4 12:35:18.563257 containerd[1556]: time="2025-11-04T12:35:18.563216525Z" level=info msg="connecting to shim c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167" address="unix:///run/containerd/s/0f52a848e1f1cdf8775c1f3681198affc20e8d3eaa02a0a018b67d6b3498ddbf" protocol=ttrpc version=3 Nov 4 12:35:18.589466 systemd[1]: Started cri-containerd-c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167.scope - libcontainer container c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167. Nov 4 12:35:18.618854 containerd[1556]: time="2025-11-04T12:35:18.618764528Z" level=info msg="StartContainer for \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" returns successfully" Nov 4 12:35:18.632886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379146050.mount: Deactivated successfully. 
Nov 4 12:35:18.637598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 12:35:18.637799 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:35:18.637851 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:35:18.640032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:35:18.642767 systemd[1]: cri-containerd-c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167.scope: Deactivated successfully. Nov 4 12:35:18.643947 containerd[1556]: time="2025-11-04T12:35:18.643776999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" id:\"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" pid:3191 exited_at:{seconds:1762259718 nanos:643415190}" Nov 4 12:35:18.643947 containerd[1556]: time="2025-11-04T12:35:18.643829001Z" level=info msg="received exit event container_id:\"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" id:\"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" pid:3191 exited_at:{seconds:1762259718 nanos:643415190}" Nov 4 12:35:18.673752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:35:19.025790 containerd[1556]: time="2025-11-04T12:35:19.025711414Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:35:19.026556 containerd[1556]: time="2025-11-04T12:35:19.026362750Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Nov 4 12:35:19.028007 containerd[1556]: time="2025-11-04T12:35:19.027972029Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:35:19.029215 containerd[1556]: time="2025-11-04T12:35:19.029175538Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.735411954s" Nov 4 12:35:19.029352 containerd[1556]: time="2025-11-04T12:35:19.029315581Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 4 12:35:19.033991 containerd[1556]: time="2025-11-04T12:35:19.033652246Z" level=info msg="CreateContainer within sandbox \"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 12:35:19.039094 containerd[1556]: time="2025-11-04T12:35:19.039047896Z" level=info msg="Container 3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:19.043707 containerd[1556]: time="2025-11-04T12:35:19.043674527Z" level=info msg="CreateContainer within sandbox \"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\"" Nov 4 12:35:19.044616 containerd[1556]: time="2025-11-04T12:35:19.044160699Z" level=info msg="StartContainer for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\"" Nov 4 12:35:19.044989 containerd[1556]: time="2025-11-04T12:35:19.044962478Z" level=info msg="connecting to shim 3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056" address="unix:///run/containerd/s/b85b24be9407261caffd6f46328ec5f43eb2bb0de8f8ac7893d0c0710ce3bbb9" protocol=ttrpc version=3 Nov 4 12:35:19.063727 systemd[1]: Started cri-containerd-3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056.scope - libcontainer container 3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056. Nov 4 12:35:19.088575 containerd[1556]: time="2025-11-04T12:35:19.088471968Z" level=info msg="StartContainer for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" returns successfully" Nov 4 12:35:19.543722 kubelet[2698]: E1104 12:35:19.543564 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:19.548004 kubelet[2698]: E1104 12:35:19.547144 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:19.557137 containerd[1556]: time="2025-11-04T12:35:19.555928124Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 12:35:19.556299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167-rootfs.mount: Deactivated successfully. 
Nov 4 12:35:19.576793 containerd[1556]: time="2025-11-04T12:35:19.576744226Z" level=info msg="Container 25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:19.582588 kubelet[2698]: I1104 12:35:19.581945 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kwbgf" podStartSLOduration=1.884817288 podStartE2EDuration="15.581927271s" podCreationTimestamp="2025-11-04 12:35:04 +0000 UTC" firstStartedPulling="2025-11-04 12:35:05.333038778 +0000 UTC m=+7.961564219" lastFinishedPulling="2025-11-04 12:35:19.030148801 +0000 UTC m=+21.658674202" observedRunningTime="2025-11-04 12:35:19.55577876 +0000 UTC m=+22.184304201" watchObservedRunningTime="2025-11-04 12:35:19.581927271 +0000 UTC m=+22.210452712" Nov 4 12:35:19.588892 containerd[1556]: time="2025-11-04T12:35:19.588835037Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\"" Nov 4 12:35:19.590333 containerd[1556]: time="2025-11-04T12:35:19.589484533Z" level=info msg="StartContainer for \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\"" Nov 4 12:35:19.591050 containerd[1556]: time="2025-11-04T12:35:19.591005650Z" level=info msg="connecting to shim 25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f" address="unix:///run/containerd/s/0f52a848e1f1cdf8775c1f3681198affc20e8d3eaa02a0a018b67d6b3498ddbf" protocol=ttrpc version=3 Nov 4 12:35:19.624709 systemd[1]: Started cri-containerd-25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f.scope - libcontainer container 25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f. Nov 4 12:35:19.714275 containerd[1556]: time="2025-11-04T12:35:19.714233582Z" level=info msg="StartContainer for \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" returns successfully" Nov 4 12:35:19.717478 systemd[1]: cri-containerd-25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f.scope: Deactivated successfully. Nov 4 12:35:19.718716 containerd[1556]: time="2025-11-04T12:35:19.718679809Z" level=info msg="received exit event container_id:\"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" id:\"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" pid:3289 exited_at:{seconds:1762259719 nanos:718290000}" Nov 4 12:35:19.718902 containerd[1556]: time="2025-11-04T12:35:19.718879294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" id:\"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" pid:3289 exited_at:{seconds:1762259719 nanos:718290000}" Nov 4 12:35:20.553464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f-rootfs.mount: Deactivated successfully. 
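The pod_startup_latency_tracker entry above for cilium-operator-6c4d7847fc-kwbgf is internally consistent: podStartE2EDuration (15.581927271s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.884817288s) matches that E2E value minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). The short Go check below reproduces those numbers; reading the SLO duration as "E2E minus pull time" is an interpretation of these figures, not a statement about kubelet internals.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the cilium-operator pod_startup_latency_tracker entry above.
	created, _ := time.Parse(time.RFC3339, "2025-11-04T12:35:04Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-11-04T12:35:19.581927271Z")

	// Monotonic (m=+...) offsets of the image-pull window, in seconds.
	firstStartedPulling := 7.961564219
	lastFinishedPulling := 21.658674202

	e2e := observed.Sub(created).Seconds()
	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull

	fmt.Printf("podStartE2EDuration ~ %.9fs\n", e2e) // 15.581927271s in the log
	fmt.Printf("podStartSLOduration ~ %.9fs\n", slo) // 1.884817288s in the log
}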
Nov 4 12:35:20.554520 kubelet[2698]: E1104 12:35:20.554325 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:20.555354 kubelet[2698]: E1104 12:35:20.555165 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:20.561803 containerd[1556]: time="2025-11-04T12:35:20.561763472Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 12:35:20.621961 containerd[1556]: time="2025-11-04T12:35:20.621902019Z" level=info msg="Container 2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:20.636085 containerd[1556]: time="2025-11-04T12:35:20.635973224Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\"" Nov 4 12:35:20.636456 containerd[1556]: time="2025-11-04T12:35:20.636428514Z" level=info msg="StartContainer for \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\"" Nov 4 12:35:20.637222 containerd[1556]: time="2025-11-04T12:35:20.637187292Z" level=info msg="connecting to shim 2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7" address="unix:///run/containerd/s/0f52a848e1f1cdf8775c1f3681198affc20e8d3eaa02a0a018b67d6b3498ddbf" protocol=ttrpc version=3 Nov 4 12:35:20.657713 systemd[1]: Started cri-containerd-2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7.scope - libcontainer container 2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7. Nov 4 12:35:20.679444 systemd[1]: cri-containerd-2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7.scope: Deactivated successfully. Nov 4 12:35:20.682254 containerd[1556]: time="2025-11-04T12:35:20.682186089Z" level=info msg="received exit event container_id:\"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" id:\"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" pid:3328 exited_at:{seconds:1762259720 nanos:680088681}" Nov 4 12:35:20.682395 containerd[1556]: time="2025-11-04T12:35:20.682373254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" id:\"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" pid:3328 exited_at:{seconds:1762259720 nanos:680088681}" Nov 4 12:35:20.683226 containerd[1556]: time="2025-11-04T12:35:20.682952027Z" level=info msg="StartContainer for \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" returns successfully" Nov 4 12:35:20.698136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7-rootfs.mount: Deactivated successfully. 
Nov 4 12:35:21.560625 kubelet[2698]: E1104 12:35:21.560590 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:21.565587 containerd[1556]: time="2025-11-04T12:35:21.565424221Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 12:35:21.575324 containerd[1556]: time="2025-11-04T12:35:21.575289278Z" level=info msg="Container c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:21.580770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919771925.mount: Deactivated successfully. Nov 4 12:35:21.585362 containerd[1556]: time="2025-11-04T12:35:21.585321900Z" level=info msg="CreateContainer within sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\"" Nov 4 12:35:21.585882 containerd[1556]: time="2025-11-04T12:35:21.585857232Z" level=info msg="StartContainer for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\"" Nov 4 12:35:21.587265 containerd[1556]: time="2025-11-04T12:35:21.587228742Z" level=info msg="connecting to shim c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff" address="unix:///run/containerd/s/0f52a848e1f1cdf8775c1f3681198affc20e8d3eaa02a0a018b67d6b3498ddbf" protocol=ttrpc version=3 Nov 4 12:35:21.604715 systemd[1]: Started cri-containerd-c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff.scope - libcontainer container c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff. Nov 4 12:35:21.632455 containerd[1556]: time="2025-11-04T12:35:21.632327097Z" level=info msg="StartContainer for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" returns successfully" Nov 4 12:35:21.727804 containerd[1556]: time="2025-11-04T12:35:21.727728163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" id:\"21c5952c847608e39e4bde06e8887fc2c6cb77c4dddba606c1d491d38598a948\" pid:3395 exited_at:{seconds:1762259721 nanos:727309114}" Nov 4 12:35:21.778532 kubelet[2698]: I1104 12:35:21.778500 2698 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 12:35:21.836294 systemd[1]: Created slice kubepods-burstable-pod20f5f3bc_83c7_48b2_9bb4_0e87fdbcfc3b.slice - libcontainer container kubepods-burstable-pod20f5f3bc_83c7_48b2_9bb4_0e87fdbcfc3b.slice. Nov 4 12:35:21.844373 systemd[1]: Created slice kubepods-burstable-pod230cbf07_0d4a_4030_9367_3da44c9ca7ce.slice - libcontainer container kubepods-burstable-pod230cbf07_0d4a_4030_9367_3da44c9ca7ce.slice. 
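The transient mount units in this log, such as var-lib-containerd-tmpmounts-containerd\x2dmount3919771925.mount, are systemd-escaped forms of paths under /var/lib/containerd/tmpmounts/: the leading "/" is dropped, path separators become "-", and literal dashes are hex-escaped as \x2d. Below is a simplified Go sketch of that mapping; the real systemd-escape(1) logic covers more edge cases, and the sample path is hypothetical.

package main

import "fmt"

// escapePath is a simplified version of systemd's path escaping as seen in the
// mount unit names above: the leading "/" is dropped, "/" separators become "-",
// and other bytes outside [A-Za-z0-9_.] (notably "-") become \xNN escapes.
func escapePath(path string) string {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	out := ""
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// Hypothetical tmpmount path of the kind containerd creates on this node.
	p := "/var/lib/containerd/tmpmounts/containerd-mount3919771925"
	fmt.Println(escapePath(p) + ".mount")
	// Prints: var-lib-containerd-tmpmounts-containerd\x2dmount3919771925.mount
}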
Nov 4 12:35:21.935026 kubelet[2698]: I1104 12:35:21.934989 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9d6\" (UniqueName: \"kubernetes.io/projected/230cbf07-0d4a-4030-9367-3da44c9ca7ce-kube-api-access-rn9d6\") pod \"coredns-674b8bbfcf-7k9hc\" (UID: \"230cbf07-0d4a-4030-9367-3da44c9ca7ce\") " pod="kube-system/coredns-674b8bbfcf-7k9hc" Nov 4 12:35:21.935026 kubelet[2698]: I1104 12:35:21.935034 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20f5f3bc-83c7-48b2-9bb4-0e87fdbcfc3b-config-volume\") pod \"coredns-674b8bbfcf-fx672\" (UID: \"20f5f3bc-83c7-48b2-9bb4-0e87fdbcfc3b\") " pod="kube-system/coredns-674b8bbfcf-fx672" Nov 4 12:35:21.935304 kubelet[2698]: I1104 12:35:21.935057 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/230cbf07-0d4a-4030-9367-3da44c9ca7ce-config-volume\") pod \"coredns-674b8bbfcf-7k9hc\" (UID: \"230cbf07-0d4a-4030-9367-3da44c9ca7ce\") " pod="kube-system/coredns-674b8bbfcf-7k9hc" Nov 4 12:35:21.935304 kubelet[2698]: I1104 12:35:21.935078 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c767v\" (UniqueName: \"kubernetes.io/projected/20f5f3bc-83c7-48b2-9bb4-0e87fdbcfc3b-kube-api-access-c767v\") pod \"coredns-674b8bbfcf-fx672\" (UID: \"20f5f3bc-83c7-48b2-9bb4-0e87fdbcfc3b\") " pod="kube-system/coredns-674b8bbfcf-fx672" Nov 4 12:35:22.140788 kubelet[2698]: E1104 12:35:22.140533 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:22.141441 containerd[1556]: time="2025-11-04T12:35:22.141340682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fx672,Uid:20f5f3bc-83c7-48b2-9bb4-0e87fdbcfc3b,Namespace:kube-system,Attempt:0,}" Nov 4 12:35:22.150802 kubelet[2698]: E1104 12:35:22.150773 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:22.151574 containerd[1556]: time="2025-11-04T12:35:22.151490137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7k9hc,Uid:230cbf07-0d4a-4030-9367-3da44c9ca7ce,Namespace:kube-system,Attempt:0,}" Nov 4 12:35:22.566665 kubelet[2698]: E1104 12:35:22.566616 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:22.589351 kubelet[2698]: I1104 12:35:22.589119 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-565jf" podStartSLOduration=6.008389302 podStartE2EDuration="18.589102989s" podCreationTimestamp="2025-11-04 12:35:04 +0000 UTC" firstStartedPulling="2025-11-04 12:35:04.712764408 +0000 UTC m=+7.341289849" lastFinishedPulling="2025-11-04 12:35:17.293478095 +0000 UTC m=+19.922003536" observedRunningTime="2025-11-04 12:35:22.58816577 +0000 UTC m=+25.216691211" watchObservedRunningTime="2025-11-04 12:35:22.589102989 +0000 UTC m=+25.217628430" Nov 4 12:35:23.568731 kubelet[2698]: E1104 12:35:23.568698 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:23.653159 systemd-networkd[1468]: cilium_host: Link UP Nov 4 12:35:23.654089 systemd-networkd[1468]: cilium_net: Link UP Nov 4 12:35:23.654602 systemd-networkd[1468]: cilium_net: Gained carrier Nov 4 12:35:23.655312 systemd-networkd[1468]: cilium_host: Gained carrier Nov 4 12:35:23.728056 systemd-networkd[1468]: cilium_vxlan: Link UP Nov 4 12:35:23.728063 systemd-networkd[1468]: cilium_vxlan: Gained carrier Nov 4 12:35:23.769781 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:60958.service - OpenSSH per-connection server daemon (10.0.0.1:60958). Nov 4 12:35:23.794671 systemd-networkd[1468]: cilium_host: Gained IPv6LL Nov 4 12:35:23.828244 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 60958 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:23.830578 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:23.838398 systemd-logind[1532]: New session 8 of user core. Nov 4 12:35:23.850726 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 12:35:23.984042 sshd[3588]: Connection closed by 10.0.0.1 port 60958 Nov 4 12:35:23.984742 sshd-session[3585]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:23.986655 kernel: NET: Registered PF_ALG protocol family Nov 4 12:35:23.990030 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:60958.service: Deactivated successfully. Nov 4 12:35:23.991729 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 12:35:23.992896 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Nov 4 12:35:23.994009 systemd-logind[1532]: Removed session 8. Nov 4 12:35:24.337810 systemd-networkd[1468]: cilium_net: Gained IPv6LL Nov 4 12:35:24.567015 systemd-networkd[1468]: lxc_health: Link UP Nov 4 12:35:24.568808 systemd-networkd[1468]: lxc_health: Gained carrier Nov 4 12:35:24.570944 kubelet[2698]: E1104 12:35:24.570921 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:24.674891 systemd-networkd[1468]: lxc2971485b13d6: Link UP Nov 4 12:35:24.683567 kernel: eth0: renamed from tmpe810d Nov 4 12:35:24.684362 systemd-networkd[1468]: lxc2971485b13d6: Gained carrier Nov 4 12:35:24.685508 systemd-networkd[1468]: lxce64a19e9b358: Link UP Nov 4 12:35:24.697993 kernel: eth0: renamed from tmp1e068 Nov 4 12:35:24.698796 systemd-networkd[1468]: lxce64a19e9b358: Gained carrier Nov 4 12:35:25.299309 systemd-networkd[1468]: cilium_vxlan: Gained IPv6LL Nov 4 12:35:25.572374 kubelet[2698]: E1104 12:35:25.572254 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:26.258846 systemd-networkd[1468]: lxc_health: Gained IPv6LL Nov 4 12:35:26.385857 systemd-networkd[1468]: lxc2971485b13d6: Gained IPv6LL Nov 4 12:35:26.641909 systemd-networkd[1468]: lxce64a19e9b358: Gained IPv6LL Nov 4 12:35:28.135529 containerd[1556]: time="2025-11-04T12:35:28.135486191Z" level=info msg="connecting to shim e810d3b86c946e7eed35c5f8cd790955d60bc7ccfcf1df289d3d2c260b3d92c1" address="unix:///run/containerd/s/ee7b96cb8b49a5efd87503ecdc8ce399339f3ce3e163be84b68cdf8d7fb05900" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:35:28.138707 containerd[1556]: time="2025-11-04T12:35:28.138640203Z" level=info msg="connecting 
to shim 1e0687eec8c46a61aa417b5da979bd7f4a222d98f0da560f50382236c83ac8aa" address="unix:///run/containerd/s/2478dcfe342baf9b2586b702fbe1f5a13ec2fa6590ff748da917f08335bed77e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:35:28.167714 systemd[1]: Started cri-containerd-1e0687eec8c46a61aa417b5da979bd7f4a222d98f0da560f50382236c83ac8aa.scope - libcontainer container 1e0687eec8c46a61aa417b5da979bd7f4a222d98f0da560f50382236c83ac8aa. Nov 4 12:35:28.169404 systemd[1]: Started cri-containerd-e810d3b86c946e7eed35c5f8cd790955d60bc7ccfcf1df289d3d2c260b3d92c1.scope - libcontainer container e810d3b86c946e7eed35c5f8cd790955d60bc7ccfcf1df289d3d2c260b3d92c1. Nov 4 12:35:28.183106 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:35:28.186623 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:35:28.207598 containerd[1556]: time="2025-11-04T12:35:28.207403069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fx672,Uid:20f5f3bc-83c7-48b2-9bb4-0e87fdbcfc3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e0687eec8c46a61aa417b5da979bd7f4a222d98f0da560f50382236c83ac8aa\"" Nov 4 12:35:28.208500 kubelet[2698]: E1104 12:35:28.208474 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:28.211733 containerd[1556]: time="2025-11-04T12:35:28.211665940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7k9hc,Uid:230cbf07-0d4a-4030-9367-3da44c9ca7ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"e810d3b86c946e7eed35c5f8cd790955d60bc7ccfcf1df289d3d2c260b3d92c1\"" Nov 4 12:35:28.212621 kubelet[2698]: E1104 12:35:28.212601 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:28.215120 containerd[1556]: time="2025-11-04T12:35:28.215052117Z" level=info msg="CreateContainer within sandbox \"1e0687eec8c46a61aa417b5da979bd7f4a222d98f0da560f50382236c83ac8aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 12:35:28.216098 containerd[1556]: time="2025-11-04T12:35:28.216073414Z" level=info msg="CreateContainer within sandbox \"e810d3b86c946e7eed35c5f8cd790955d60bc7ccfcf1df289d3d2c260b3d92c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 12:35:28.228435 containerd[1556]: time="2025-11-04T12:35:28.228398579Z" level=info msg="Container 21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:28.230217 containerd[1556]: time="2025-11-04T12:35:28.230192009Z" level=info msg="Container 0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:35:28.233623 containerd[1556]: time="2025-11-04T12:35:28.233591626Z" level=info msg="CreateContainer within sandbox \"e810d3b86c946e7eed35c5f8cd790955d60bc7ccfcf1df289d3d2c260b3d92c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1\"" Nov 4 12:35:28.234184 containerd[1556]: time="2025-11-04T12:35:28.234125995Z" level=info msg="StartContainer for \"21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1\"" Nov 4 12:35:28.235130 containerd[1556]: 
time="2025-11-04T12:35:28.235059050Z" level=info msg="connecting to shim 21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1" address="unix:///run/containerd/s/ee7b96cb8b49a5efd87503ecdc8ce399339f3ce3e163be84b68cdf8d7fb05900" protocol=ttrpc version=3 Nov 4 12:35:28.235891 containerd[1556]: time="2025-11-04T12:35:28.235860984Z" level=info msg="CreateContainer within sandbox \"1e0687eec8c46a61aa417b5da979bd7f4a222d98f0da560f50382236c83ac8aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe\"" Nov 4 12:35:28.236657 containerd[1556]: time="2025-11-04T12:35:28.236618356Z" level=info msg="StartContainer for \"0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe\"" Nov 4 12:35:28.238195 containerd[1556]: time="2025-11-04T12:35:28.238169582Z" level=info msg="connecting to shim 0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe" address="unix:///run/containerd/s/2478dcfe342baf9b2586b702fbe1f5a13ec2fa6590ff748da917f08335bed77e" protocol=ttrpc version=3 Nov 4 12:35:28.261703 systemd[1]: Started cri-containerd-21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1.scope - libcontainer container 21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1. Nov 4 12:35:28.265243 systemd[1]: Started cri-containerd-0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe.scope - libcontainer container 0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe. Nov 4 12:35:28.295277 containerd[1556]: time="2025-11-04T12:35:28.295223813Z" level=info msg="StartContainer for \"0a57b79b35842628a282be0f3600a508e33d1f9a37203677c6620d28b49068fe\" returns successfully" Nov 4 12:35:28.303839 containerd[1556]: time="2025-11-04T12:35:28.303801156Z" level=info msg="StartContainer for \"21f93ce1c889512f531f72b1f31d57d701fa4e68994e2e0a499546c1a50e4cd1\" returns successfully" Nov 4 12:35:28.580156 kubelet[2698]: E1104 12:35:28.579906 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:28.584213 kubelet[2698]: E1104 12:35:28.584193 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:28.596809 kubelet[2698]: I1104 12:35:28.596760 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7k9hc" podStartSLOduration=24.596745399 podStartE2EDuration="24.596745399s" podCreationTimestamp="2025-11-04 12:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:35:28.596153389 +0000 UTC m=+31.224678830" watchObservedRunningTime="2025-11-04 12:35:28.596745399 +0000 UTC m=+31.225270840" Nov 4 12:35:28.619380 kubelet[2698]: I1104 12:35:28.618755 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fx672" podStartSLOduration=24.618741326 podStartE2EDuration="24.618741326s" podCreationTimestamp="2025-11-04 12:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:35:28.618730006 +0000 UTC m=+31.247255447" watchObservedRunningTime="2025-11-04 12:35:28.618741326 +0000 UTC m=+31.247266767" Nov 4 12:35:28.999088 
systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:60972.service - OpenSSH per-connection server daemon (10.0.0.1:60972). Nov 4 12:35:29.072599 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 60972 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:29.074053 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:29.078752 systemd-logind[1532]: New session 9 of user core. Nov 4 12:35:29.090696 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 12:35:29.208943 sshd[4066]: Connection closed by 10.0.0.1 port 60972 Nov 4 12:35:29.209248 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:29.212862 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:60972.service: Deactivated successfully. Nov 4 12:35:29.214865 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 12:35:29.215687 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Nov 4 12:35:29.216840 systemd-logind[1532]: Removed session 9. Nov 4 12:35:29.584933 kubelet[2698]: E1104 12:35:29.584895 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:29.585260 kubelet[2698]: E1104 12:35:29.584991 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:30.588006 kubelet[2698]: E1104 12:35:30.587922 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:30.589462 kubelet[2698]: E1104 12:35:30.589438 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:33.165655 kubelet[2698]: I1104 12:35:33.165601 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 12:35:33.166058 kubelet[2698]: E1104 12:35:33.166031 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:33.594000 kubelet[2698]: E1104 12:35:33.593767 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:35:34.224504 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:50698.service - OpenSSH per-connection server daemon (10.0.0.1:50698). Nov 4 12:35:34.289659 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 50698 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:34.291662 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:34.299734 systemd-logind[1532]: New session 10 of user core. Nov 4 12:35:34.307791 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 12:35:34.436914 sshd[4083]: Connection closed by 10.0.0.1 port 50698 Nov 4 12:35:34.437225 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:34.442351 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:50698.service: Deactivated successfully. Nov 4 12:35:34.442625 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. 
Nov 4 12:35:34.446053 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 12:35:34.447680 systemd-logind[1532]: Removed session 10. Nov 4 12:35:39.459689 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:35580.service - OpenSSH per-connection server daemon (10.0.0.1:35580). Nov 4 12:35:39.519190 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 35580 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:39.520335 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:39.524474 systemd-logind[1532]: New session 11 of user core. Nov 4 12:35:39.534718 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 12:35:39.670125 sshd[4106]: Connection closed by 10.0.0.1 port 35580 Nov 4 12:35:39.670832 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:39.680321 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:35580.service: Deactivated successfully. Nov 4 12:35:39.683172 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 12:35:39.684005 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Nov 4 12:35:39.686448 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:35594.service - OpenSSH per-connection server daemon (10.0.0.1:35594). Nov 4 12:35:39.687207 systemd-logind[1532]: Removed session 11. Nov 4 12:35:39.757129 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 35594 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:39.758468 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:39.763272 systemd-logind[1532]: New session 12 of user core. Nov 4 12:35:39.766697 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 12:35:39.917013 sshd[4126]: Connection closed by 10.0.0.1 port 35594 Nov 4 12:35:39.917739 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:39.926027 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:35594.service: Deactivated successfully. Nov 4 12:35:39.928413 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 12:35:39.930367 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Nov 4 12:35:39.933139 systemd-logind[1532]: Removed session 12. Nov 4 12:35:39.936041 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608). Nov 4 12:35:39.999128 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:40.000754 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:40.004589 systemd-logind[1532]: New session 13 of user core. Nov 4 12:35:40.010717 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 12:35:40.127040 sshd[4141]: Connection closed by 10.0.0.1 port 35608 Nov 4 12:35:40.127588 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:40.131462 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:35608.service: Deactivated successfully. Nov 4 12:35:40.133585 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 12:35:40.134561 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Nov 4 12:35:40.135998 systemd-logind[1532]: Removed session 13. Nov 4 12:35:45.147528 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:35620.service - OpenSSH per-connection server daemon (10.0.0.1:35620). 
Nov 4 12:35:45.214954 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 35620 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:45.216226 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:45.220996 systemd-logind[1532]: New session 14 of user core. Nov 4 12:35:45.234766 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 12:35:45.357405 sshd[4158]: Connection closed by 10.0.0.1 port 35620 Nov 4 12:35:45.358343 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:45.369725 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:35620.service: Deactivated successfully. Nov 4 12:35:45.372051 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 12:35:45.372790 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. Nov 4 12:35:45.375121 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:35628.service - OpenSSH per-connection server daemon (10.0.0.1:35628). Nov 4 12:35:45.376405 systemd-logind[1532]: Removed session 14. Nov 4 12:35:45.438960 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 35628 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:45.440254 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:45.444847 systemd-logind[1532]: New session 15 of user core. Nov 4 12:35:45.453703 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 12:35:45.643347 sshd[4175]: Connection closed by 10.0.0.1 port 35628 Nov 4 12:35:45.644611 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:45.657903 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:35628.service: Deactivated successfully. Nov 4 12:35:45.659821 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 12:35:45.661704 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Nov 4 12:35:45.663496 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:35636.service - OpenSSH per-connection server daemon (10.0.0.1:35636). Nov 4 12:35:45.665002 systemd-logind[1532]: Removed session 15. Nov 4 12:35:45.740956 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 35636 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:45.742256 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:45.746734 systemd-logind[1532]: New session 16 of user core. Nov 4 12:35:45.757706 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 12:35:46.348256 sshd[4190]: Connection closed by 10.0.0.1 port 35636 Nov 4 12:35:46.348895 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:46.359656 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:35636.service: Deactivated successfully. Nov 4 12:35:46.361205 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 12:35:46.362037 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Nov 4 12:35:46.363845 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:35640.service - OpenSSH per-connection server daemon (10.0.0.1:35640). Nov 4 12:35:46.368170 systemd-logind[1532]: Removed session 16. 
Nov 4 12:35:46.427676 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 35640 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:46.429023 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:46.433286 systemd-logind[1532]: New session 17 of user core. Nov 4 12:35:46.446725 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 12:35:46.683332 sshd[4212]: Connection closed by 10.0.0.1 port 35640 Nov 4 12:35:46.683862 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:46.691631 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:35640.service: Deactivated successfully. Nov 4 12:35:46.693301 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 12:35:46.696002 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Nov 4 12:35:46.698323 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:35646.service - OpenSSH per-connection server daemon (10.0.0.1:35646). Nov 4 12:35:46.700694 systemd-logind[1532]: Removed session 17. Nov 4 12:35:46.770270 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 35646 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:46.771485 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:46.776162 systemd-logind[1532]: New session 18 of user core. Nov 4 12:35:46.786751 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 12:35:46.897262 sshd[4226]: Connection closed by 10.0.0.1 port 35646 Nov 4 12:35:46.897286 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:46.901106 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:35646.service: Deactivated successfully. Nov 4 12:35:46.904023 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 12:35:46.904689 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. Nov 4 12:35:46.905678 systemd-logind[1532]: Removed session 18. Nov 4 12:35:51.912639 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:35824.service - OpenSSH per-connection server daemon (10.0.0.1:35824). Nov 4 12:35:51.974834 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 35824 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:51.975927 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:51.979519 systemd-logind[1532]: New session 19 of user core. Nov 4 12:35:51.991693 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 12:35:52.098582 sshd[4244]: Connection closed by 10.0.0.1 port 35824 Nov 4 12:35:52.099064 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:52.102700 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:35824.service: Deactivated successfully. Nov 4 12:35:52.104265 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 12:35:52.104955 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Nov 4 12:35:52.105818 systemd-logind[1532]: Removed session 19. Nov 4 12:35:57.114919 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:35840.service - OpenSSH per-connection server daemon (10.0.0.1:35840). 
Nov 4 12:35:57.175609 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 35840 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:35:57.174070 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:35:57.180662 systemd-logind[1532]: New session 20 of user core. Nov 4 12:35:57.190698 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 12:35:57.306607 sshd[4261]: Connection closed by 10.0.0.1 port 35840 Nov 4 12:35:57.306015 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Nov 4 12:35:57.309748 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:35840.service: Deactivated successfully. Nov 4 12:35:57.311410 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 12:35:57.312129 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit. Nov 4 12:35:57.313880 systemd-logind[1532]: Removed session 20. Nov 4 12:36:02.317756 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:54328.service - OpenSSH per-connection server daemon (10.0.0.1:54328). Nov 4 12:36:02.364682 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 54328 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:36:02.365792 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:36:02.370087 systemd-logind[1532]: New session 21 of user core. Nov 4 12:36:02.387701 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 12:36:02.491978 sshd[4282]: Connection closed by 10.0.0.1 port 54328 Nov 4 12:36:02.492695 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Nov 4 12:36:02.506625 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:54328.service: Deactivated successfully. Nov 4 12:36:02.508584 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 12:36:02.510521 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Nov 4 12:36:02.511520 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). Nov 4 12:36:02.513099 systemd-logind[1532]: Removed session 21. Nov 4 12:36:02.571780 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:36:02.572827 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:36:02.576788 systemd-logind[1532]: New session 22 of user core. Nov 4 12:36:02.587685 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 12:36:04.087661 containerd[1556]: time="2025-11-04T12:36:04.087588941Z" level=info msg="StopContainer for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" with timeout 30 (s)" Nov 4 12:36:04.088957 containerd[1556]: time="2025-11-04T12:36:04.088769631Z" level=info msg="Stop container \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" with signal terminated" Nov 4 12:36:04.099765 systemd[1]: cri-containerd-3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056.scope: Deactivated successfully. 
Nov 4 12:36:04.102611 containerd[1556]: time="2025-11-04T12:36:04.102443742Z" level=info msg="received exit event container_id:\"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" id:\"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" pid:3257 exited_at:{seconds:1762259764 nanos:101001130}" Nov 4 12:36:04.102885 containerd[1556]: time="2025-11-04T12:36:04.102861905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" id:\"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" pid:3257 exited_at:{seconds:1762259764 nanos:101001130}" Nov 4 12:36:04.118151 containerd[1556]: time="2025-11-04T12:36:04.118115909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" id:\"b1ed24c97afe27c023b0360dfa5276646e82108bcd53040a60bee08ff0990fbf\" pid:4325 exited_at:{seconds:1762259764 nanos:117865147}" Nov 4 12:36:04.118674 containerd[1556]: time="2025-11-04T12:36:04.118633633Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 12:36:04.129362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056-rootfs.mount: Deactivated successfully. Nov 4 12:36:04.139346 containerd[1556]: time="2025-11-04T12:36:04.139293922Z" level=info msg="StopContainer for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" returns successfully" Nov 4 12:36:04.145730 containerd[1556]: time="2025-11-04T12:36:04.145699214Z" level=info msg="StopContainer for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" with timeout 2 (s)" Nov 4 12:36:04.146009 containerd[1556]: time="2025-11-04T12:36:04.145987736Z" level=info msg="Stop container \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" with signal terminated" Nov 4 12:36:04.146462 containerd[1556]: time="2025-11-04T12:36:04.146433860Z" level=info msg="StopPodSandbox for \"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\"" Nov 4 12:36:04.151842 systemd-networkd[1468]: lxc_health: Link DOWN Nov 4 12:36:04.151852 systemd-networkd[1468]: lxc_health: Lost carrier Nov 4 12:36:04.164979 systemd[1]: cri-containerd-c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff.scope: Deactivated successfully. Nov 4 12:36:04.165299 systemd[1]: cri-containerd-c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff.scope: Consumed 6.070s CPU time, 125.4M memory peak, 144K read from disk, 12.9M written to disk. 
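The exited_at:{seconds:... nanos:...} fields in these TaskExit events are plain Unix timestamps; seconds:1762259764 from the entry above lands on the same 12:36:04 UTC wall-clock time carried by the surrounding log lines. A tiny Go conversion, just to make the correspondence explicit:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at value from the TaskExit event for container 3580cc63... above.
	exitedAt := time.Unix(1762259764, 101001130).UTC()
	fmt.Println(exitedAt.Format("2006-01-02 15:04:05.000000000 MST"))
	// Prints: 2025-11-04 12:36:04.101001130 UTC
}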
Nov 4 12:36:04.166855 containerd[1556]: time="2025-11-04T12:36:04.166352581Z" level=info msg="received exit event container_id:\"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" id:\"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" pid:3365 exited_at:{seconds:1762259764 nanos:166136699}" Nov 4 12:36:04.166855 containerd[1556]: time="2025-11-04T12:36:04.166521102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" id:\"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" pid:3365 exited_at:{seconds:1762259764 nanos:166136699}" Nov 4 12:36:04.170171 containerd[1556]: time="2025-11-04T12:36:04.170140412Z" level=info msg="Container to stop \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:36:04.176651 systemd[1]: cri-containerd-1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140.scope: Deactivated successfully. Nov 4 12:36:04.184061 containerd[1556]: time="2025-11-04T12:36:04.183957843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" id:\"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" pid:3075 exit_status:137 exited_at:{seconds:1762259764 nanos:183639401}" Nov 4 12:36:04.187445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff-rootfs.mount: Deactivated successfully. Nov 4 12:36:04.197558 containerd[1556]: time="2025-11-04T12:36:04.197438432Z" level=info msg="StopContainer for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" returns successfully" Nov 4 12:36:04.198721 containerd[1556]: time="2025-11-04T12:36:04.198687282Z" level=info msg="StopPodSandbox for \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\"" Nov 4 12:36:04.198786 containerd[1556]: time="2025-11-04T12:36:04.198761483Z" level=info msg="Container to stop \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:36:04.198786 containerd[1556]: time="2025-11-04T12:36:04.198774203Z" level=info msg="Container to stop \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:36:04.198786 containerd[1556]: time="2025-11-04T12:36:04.198783363Z" level=info msg="Container to stop \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:36:04.198855 containerd[1556]: time="2025-11-04T12:36:04.198792363Z" level=info msg="Container to stop \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:36:04.198855 containerd[1556]: time="2025-11-04T12:36:04.198800283Z" level=info msg="Container to stop \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:36:04.205165 systemd[1]: cri-containerd-870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3.scope: Deactivated successfully. 
Nov 4 12:36:04.214532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140-rootfs.mount: Deactivated successfully. Nov 4 12:36:04.219086 containerd[1556]: time="2025-11-04T12:36:04.218951126Z" level=info msg="shim disconnected" id=1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140 namespace=k8s.io Nov 4 12:36:04.230188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3-rootfs.mount: Deactivated successfully. Nov 4 12:36:04.236008 containerd[1556]: time="2025-11-04T12:36:04.218989727Z" level=warning msg="cleaning up after shim disconnected" id=1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140 namespace=k8s.io Nov 4 12:36:04.236008 containerd[1556]: time="2025-11-04T12:36:04.236000304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 12:36:04.236148 containerd[1556]: time="2025-11-04T12:36:04.232575197Z" level=info msg="shim disconnected" id=870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3 namespace=k8s.io Nov 4 12:36:04.236148 containerd[1556]: time="2025-11-04T12:36:04.236091105Z" level=warning msg="cleaning up after shim disconnected" id=870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3 namespace=k8s.io Nov 4 12:36:04.236148 containerd[1556]: time="2025-11-04T12:36:04.236116185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 12:36:04.257902 containerd[1556]: time="2025-11-04T12:36:04.257838881Z" level=info msg="TearDown network for sandbox \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" successfully" Nov 4 12:36:04.257902 containerd[1556]: time="2025-11-04T12:36:04.257890081Z" level=info msg="StopPodSandbox for \"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" returns successfully" Nov 4 12:36:04.259685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3-shm.mount: Deactivated successfully. 
Nov 4 12:36:04.266814 containerd[1556]: time="2025-11-04T12:36:04.266766033Z" level=info msg="received exit event sandbox_id:\"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" exit_status:137 exited_at:{seconds:1762259764 nanos:205315816}" Nov 4 12:36:04.269209 containerd[1556]: time="2025-11-04T12:36:04.269079132Z" level=info msg="received exit event sandbox_id:\"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" exit_status:137 exited_at:{seconds:1762259764 nanos:183639401}" Nov 4 12:36:04.269468 containerd[1556]: time="2025-11-04T12:36:04.269431855Z" level=info msg="TearDown network for sandbox \"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" successfully" Nov 4 12:36:04.269576 containerd[1556]: time="2025-11-04T12:36:04.269560736Z" level=info msg="StopPodSandbox for \"1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140\" returns successfully" Nov 4 12:36:04.269745 containerd[1556]: time="2025-11-04T12:36:04.269626736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" id:\"870428edb30c9f3dc3c31f520d8b0ce1d8e0f4611cbf646b577a5912040876a3\" pid:2859 exit_status:137 exited_at:{seconds:1762259764 nanos:205315816}" Nov 4 12:36:04.391425 kubelet[2698]: I1104 12:36:04.391309 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-kernel\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391425 kubelet[2698]: I1104 12:36:04.391364 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hubble-tls\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391425 kubelet[2698]: I1104 12:36:04.391388 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eeb86756-7b27-45f4-b0b9-1af4e818db03-cilium-config-path\") pod \"eeb86756-7b27-45f4-b0b9-1af4e818db03\" (UID: \"eeb86756-7b27-45f4-b0b9-1af4e818db03\") " Nov 4 12:36:04.391425 kubelet[2698]: I1104 12:36:04.391405 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-xtables-lock\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391425 kubelet[2698]: I1104 12:36:04.391428 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-net\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391880 kubelet[2698]: I1104 12:36:04.391445 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9vsk\" (UniqueName: \"kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-kube-api-access-s9vsk\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391880 kubelet[2698]: I1104 12:36:04.391460 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-etc-cni-netd\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391880 kubelet[2698]: I1104 12:36:04.391475 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hostproc\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391880 kubelet[2698]: I1104 12:36:04.391497 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-bpf-maps\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391880 kubelet[2698]: I1104 12:36:04.391512 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-config-path\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.391880 kubelet[2698]: I1104 12:36:04.391526 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-run\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.392015 kubelet[2698]: I1104 12:36:04.391558 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-lib-modules\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.392015 kubelet[2698]: I1104 12:36:04.391574 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cni-path\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.392015 kubelet[2698]: I1104 12:36:04.391610 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grt9d\" (UniqueName: \"kubernetes.io/projected/eeb86756-7b27-45f4-b0b9-1af4e818db03-kube-api-access-grt9d\") pod \"eeb86756-7b27-45f4-b0b9-1af4e818db03\" (UID: \"eeb86756-7b27-45f4-b0b9-1af4e818db03\") " Nov 4 12:36:04.392015 kubelet[2698]: I1104 12:36:04.391631 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-cgroup\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.392015 kubelet[2698]: I1104 12:36:04.391650 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-clustermesh-secrets\") pod \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\" (UID: \"14afe496-c962-41b4-ad4e-b8fcffe6e1d4\") " Nov 4 12:36:04.393558 kubelet[2698]: I1104 12:36:04.392760 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-bpf-maps" 
(OuterVolumeSpecName: "bpf-maps") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.393672 kubelet[2698]: I1104 12:36:04.393651 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.394591 kubelet[2698]: I1104 12:36:04.394554 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb86756-7b27-45f4-b0b9-1af4e818db03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eeb86756-7b27-45f4-b0b9-1af4e818db03" (UID: "eeb86756-7b27-45f4-b0b9-1af4e818db03"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 12:36:04.394669 kubelet[2698]: I1104 12:36:04.394603 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.394669 kubelet[2698]: I1104 12:36:04.394620 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.395344 kubelet[2698]: I1104 12:36:04.395311 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 12:36:04.395410 kubelet[2698]: I1104 12:36:04.395354 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.395410 kubelet[2698]: I1104 12:36:04.395370 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.395410 kubelet[2698]: I1104 12:36:04.395384 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.398928 kubelet[2698]: I1104 12:36:04.398886 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.399020 kubelet[2698]: I1104 12:36:04.398933 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.399020 kubelet[2698]: I1104 12:36:04.398945 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:36:04.399935 kubelet[2698]: I1104 12:36:04.399912 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:36:04.400054 kubelet[2698]: I1104 12:36:04.399948 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-kube-api-access-s9vsk" (OuterVolumeSpecName: "kube-api-access-s9vsk") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "kube-api-access-s9vsk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:36:04.400106 kubelet[2698]: I1104 12:36:04.399967 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb86756-7b27-45f4-b0b9-1af4e818db03-kube-api-access-grt9d" (OuterVolumeSpecName: "kube-api-access-grt9d") pod "eeb86756-7b27-45f4-b0b9-1af4e818db03" (UID: "eeb86756-7b27-45f4-b0b9-1af4e818db03"). InnerVolumeSpecName "kube-api-access-grt9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:36:04.400284 kubelet[2698]: I1104 12:36:04.400262 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "14afe496-c962-41b4-ad4e-b8fcffe6e1d4" (UID: "14afe496-c962-41b4-ad4e-b8fcffe6e1d4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 12:36:04.492106 kubelet[2698]: I1104 12:36:04.492057 2698 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492106 kubelet[2698]: I1104 12:36:04.492096 2698 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492106 kubelet[2698]: I1104 12:36:04.492106 2698 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grt9d\" (UniqueName: \"kubernetes.io/projected/eeb86756-7b27-45f4-b0b9-1af4e818db03-kube-api-access-grt9d\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492106 kubelet[2698]: I1104 12:36:04.492117 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492125 2698 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492134 2698 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492142 2698 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492149 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eeb86756-7b27-45f4-b0b9-1af4e818db03-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492157 2698 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492164 2698 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492171 2698 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s9vsk\" (UniqueName: \"kubernetes.io/projected/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-kube-api-access-s9vsk\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492299 kubelet[2698]: I1104 12:36:04.492178 2698 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492450 kubelet[2698]: I1104 12:36:04.492186 2698 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-hostproc\") on node 
\"localhost\" DevicePath \"\"" Nov 4 12:36:04.492450 kubelet[2698]: I1104 12:36:04.492192 2698 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492450 kubelet[2698]: I1104 12:36:04.492199 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.492450 kubelet[2698]: I1104 12:36:04.492207 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14afe496-c962-41b4-ad4e-b8fcffe6e1d4-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 4 12:36:04.655052 systemd[1]: Removed slice kubepods-besteffort-podeeb86756_7b27_45f4_b0b9_1af4e818db03.slice - libcontainer container kubepods-besteffort-podeeb86756_7b27_45f4_b0b9_1af4e818db03.slice. Nov 4 12:36:04.655654 kubelet[2698]: I1104 12:36:04.655628 2698 scope.go:117] "RemoveContainer" containerID="3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056" Nov 4 12:36:04.658116 containerd[1556]: time="2025-11-04T12:36:04.658081358Z" level=info msg="RemoveContainer for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\"" Nov 4 12:36:04.662701 systemd[1]: Removed slice kubepods-burstable-pod14afe496_c962_41b4_ad4e_b8fcffe6e1d4.slice - libcontainer container kubepods-burstable-pod14afe496_c962_41b4_ad4e_b8fcffe6e1d4.slice. Nov 4 12:36:04.663000 systemd[1]: kubepods-burstable-pod14afe496_c962_41b4_ad4e_b8fcffe6e1d4.slice: Consumed 6.153s CPU time, 125.7M memory peak, 156K read from disk, 12.9M written to disk. 
Nov 4 12:36:04.663213 containerd[1556]: time="2025-11-04T12:36:04.663150159Z" level=info msg="RemoveContainer for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" returns successfully" Nov 4 12:36:04.663465 kubelet[2698]: I1104 12:36:04.663433 2698 scope.go:117] "RemoveContainer" containerID="3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056" Nov 4 12:36:04.671149 containerd[1556]: time="2025-11-04T12:36:04.663762404Z" level=error msg="ContainerStatus for \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\": not found" Nov 4 12:36:04.675164 kubelet[2698]: E1104 12:36:04.675079 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\": not found" containerID="3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056" Nov 4 12:36:04.675164 kubelet[2698]: I1104 12:36:04.675136 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056"} err="failed to get container status \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\": rpc error: code = NotFound desc = an error occurred when try to find container \"3580cc6302f14888151f6d6d38244422918ac4dd3c28f19744a1ae89eea56056\": not found" Nov 4 12:36:04.675283 kubelet[2698]: I1104 12:36:04.675177 2698 scope.go:117] "RemoveContainer" containerID="c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff" Nov 4 12:36:04.677912 containerd[1556]: time="2025-11-04T12:36:04.677880998Z" level=info msg="RemoveContainer for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\"" Nov 4 12:36:04.683933 containerd[1556]: time="2025-11-04T12:36:04.683881327Z" level=info msg="RemoveContainer for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" returns successfully" Nov 4 12:36:04.684193 kubelet[2698]: I1104 12:36:04.684158 2698 scope.go:117] "RemoveContainer" containerID="2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7" Nov 4 12:36:04.685425 containerd[1556]: time="2025-11-04T12:36:04.685394419Z" level=info msg="RemoveContainer for \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\"" Nov 4 12:36:04.689784 containerd[1556]: time="2025-11-04T12:36:04.689756214Z" level=info msg="RemoveContainer for \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" returns successfully" Nov 4 12:36:04.689921 kubelet[2698]: I1104 12:36:04.689901 2698 scope.go:117] "RemoveContainer" containerID="25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f" Nov 4 12:36:04.692252 containerd[1556]: time="2025-11-04T12:36:04.692229714Z" level=info msg="RemoveContainer for \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\"" Nov 4 12:36:04.695965 containerd[1556]: time="2025-11-04T12:36:04.695930664Z" level=info msg="RemoveContainer for \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" returns successfully" Nov 4 12:36:04.696272 kubelet[2698]: I1104 12:36:04.696253 2698 scope.go:117] "RemoveContainer" containerID="c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167" Nov 4 12:36:04.697743 containerd[1556]: time="2025-11-04T12:36:04.697710918Z" 
level=info msg="RemoveContainer for \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\"" Nov 4 12:36:04.700495 containerd[1556]: time="2025-11-04T12:36:04.700460661Z" level=info msg="RemoveContainer for \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" returns successfully" Nov 4 12:36:04.700726 kubelet[2698]: I1104 12:36:04.700640 2698 scope.go:117] "RemoveContainer" containerID="0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07" Nov 4 12:36:04.702371 containerd[1556]: time="2025-11-04T12:36:04.702195555Z" level=info msg="RemoveContainer for \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\"" Nov 4 12:36:04.705237 containerd[1556]: time="2025-11-04T12:36:04.705208059Z" level=info msg="RemoveContainer for \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" returns successfully" Nov 4 12:36:04.705575 kubelet[2698]: I1104 12:36:04.705523 2698 scope.go:117] "RemoveContainer" containerID="c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff" Nov 4 12:36:04.705852 containerd[1556]: time="2025-11-04T12:36:04.705818984Z" level=error msg="ContainerStatus for \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\": not found" Nov 4 12:36:04.706104 kubelet[2698]: E1104 12:36:04.706049 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\": not found" containerID="c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff" Nov 4 12:36:04.706155 kubelet[2698]: I1104 12:36:04.706112 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff"} err="failed to get container status \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9868fde06a5c8b71c9fb374bb4fb2c24897fed3ecc5fb58c5e5656762f149ff\": not found" Nov 4 12:36:04.706155 kubelet[2698]: I1104 12:36:04.706137 2698 scope.go:117] "RemoveContainer" containerID="2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7" Nov 4 12:36:04.706349 containerd[1556]: time="2025-11-04T12:36:04.706315788Z" level=error msg="ContainerStatus for \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\": not found" Nov 4 12:36:04.706453 kubelet[2698]: E1104 12:36:04.706435 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\": not found" containerID="2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7" Nov 4 12:36:04.706498 kubelet[2698]: I1104 12:36:04.706459 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7"} err="failed to get container status \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"2ee4a3556cefcb665bff491ebf81a146c1f73330d5ab805c9f8bf7ca705f41b7\": not found" Nov 4 12:36:04.706498 kubelet[2698]: I1104 12:36:04.706477 2698 scope.go:117] "RemoveContainer" containerID="25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f" Nov 4 12:36:04.706749 containerd[1556]: time="2025-11-04T12:36:04.706663151Z" level=error msg="ContainerStatus for \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\": not found" Nov 4 12:36:04.706785 kubelet[2698]: E1104 12:36:04.706757 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\": not found" containerID="25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f" Nov 4 12:36:04.706785 kubelet[2698]: I1104 12:36:04.706776 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f"} err="failed to get container status \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"25a4a968fff9a842b15c47d6591a30bb0da13698d9bef204fb2f9c307a183f7f\": not found" Nov 4 12:36:04.706840 kubelet[2698]: I1104 12:36:04.706789 2698 scope.go:117] "RemoveContainer" containerID="c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167" Nov 4 12:36:04.707104 containerd[1556]: time="2025-11-04T12:36:04.707081794Z" level=error msg="ContainerStatus for \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\": not found" Nov 4 12:36:04.707286 kubelet[2698]: E1104 12:36:04.707268 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\": not found" containerID="c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167" Nov 4 12:36:04.707336 kubelet[2698]: I1104 12:36:04.707289 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167"} err="failed to get container status \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\": rpc error: code = NotFound desc = an error occurred when try to find container \"c153e68b84c8b18b05e3078494aff3a55eb383797ce413f24f4107351fe7b167\": not found" Nov 4 12:36:04.707336 kubelet[2698]: I1104 12:36:04.707305 2698 scope.go:117] "RemoveContainer" containerID="0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07" Nov 4 12:36:04.707536 containerd[1556]: time="2025-11-04T12:36:04.707504758Z" level=error msg="ContainerStatus for \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\": not found" Nov 4 12:36:04.707773 kubelet[2698]: E1104 12:36:04.707753 2698 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\": not found" containerID="0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07" Nov 4 12:36:04.707824 kubelet[2698]: I1104 12:36:04.707793 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07"} err="failed to get container status \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\": rpc error: code = NotFound desc = an error occurred when try to find container \"0aac5dc2e2c26f56f52b55487945a616d99e23e543212e7f7f6d708ae2870c07\": not found" Nov 4 12:36:05.129394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f63f793b7a7143ce55adf46f6af9e56fe25edc6c6c9d4b00924a059e5bdc140-shm.mount: Deactivated successfully. Nov 4 12:36:05.129516 systemd[1]: var-lib-kubelet-pods-eeb86756\x2d7b27\x2d45f4\x2db0b9\x2d1af4e818db03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrt9d.mount: Deactivated successfully. Nov 4 12:36:05.129597 systemd[1]: var-lib-kubelet-pods-14afe496\x2dc962\x2d41b4\x2dad4e\x2db8fcffe6e1d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds9vsk.mount: Deactivated successfully. Nov 4 12:36:05.129655 systemd[1]: var-lib-kubelet-pods-14afe496\x2dc962\x2d41b4\x2dad4e\x2db8fcffe6e1d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 12:36:05.129861 systemd[1]: var-lib-kubelet-pods-14afe496\x2dc962\x2d41b4\x2dad4e\x2db8fcffe6e1d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 4 12:36:05.470811 kubelet[2698]: I1104 12:36:05.470766 2698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14afe496-c962-41b4-ad4e-b8fcffe6e1d4" path="/var/lib/kubelet/pods/14afe496-c962-41b4-ad4e-b8fcffe6e1d4/volumes" Nov 4 12:36:05.471289 kubelet[2698]: I1104 12:36:05.471266 2698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb86756-7b27-45f4-b0b9-1af4e818db03" path="/var/lib/kubelet/pods/eeb86756-7b27-45f4-b0b9-1af4e818db03/volumes" Nov 4 12:36:06.049933 sshd[4298]: Connection closed by 10.0.0.1 port 54336 Nov 4 12:36:06.050473 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Nov 4 12:36:06.057633 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:54336.service: Deactivated successfully. Nov 4 12:36:06.059194 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 12:36:06.060120 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Nov 4 12:36:06.062372 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:54342.service - OpenSSH per-connection server daemon (10.0.0.1:54342). Nov 4 12:36:06.063074 systemd-logind[1532]: Removed session 22. Nov 4 12:36:06.125861 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 54342 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:36:06.127192 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:36:06.131003 systemd-logind[1532]: New session 23 of user core. Nov 4 12:36:06.140699 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 4 12:36:06.936859 sshd[4453]: Connection closed by 10.0.0.1 port 54342 Nov 4 12:36:06.937309 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Nov 4 12:36:06.953225 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:54342.service: Deactivated successfully. Nov 4 12:36:06.955883 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 12:36:06.956871 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Nov 4 12:36:06.961795 systemd[1]: Started sshd@23-10.0.0.141:22-10.0.0.1:54348.service - OpenSSH per-connection server daemon (10.0.0.1:54348). Nov 4 12:36:06.964086 systemd-logind[1532]: Removed session 23. Nov 4 12:36:06.971001 systemd[1]: Created slice kubepods-burstable-pod38fc86a9_321e_401c_8c54_793929f07026.slice - libcontainer container kubepods-burstable-pod38fc86a9_321e_401c_8c54_793929f07026.slice. Nov 4 12:36:07.007277 kubelet[2698]: I1104 12:36:07.007230 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-hostproc\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007277 kubelet[2698]: I1104 12:36:07.007273 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-lib-modules\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007639 kubelet[2698]: I1104 12:36:07.007293 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-host-proc-sys-net\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007639 kubelet[2698]: I1104 12:36:07.007310 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38fc86a9-321e-401c-8c54-793929f07026-hubble-tls\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007639 kubelet[2698]: I1104 12:36:07.007329 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-cni-path\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007639 kubelet[2698]: I1104 12:36:07.007343 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-xtables-lock\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007639 kubelet[2698]: I1104 12:36:07.007358 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-etc-cni-netd\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007639 kubelet[2698]: I1104 12:36:07.007373 2698 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38fc86a9-321e-401c-8c54-793929f07026-clustermesh-secrets\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007760 kubelet[2698]: I1104 12:36:07.007388 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38fc86a9-321e-401c-8c54-793929f07026-cilium-ipsec-secrets\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007760 kubelet[2698]: I1104 12:36:07.007404 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-bpf-maps\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007760 kubelet[2698]: I1104 12:36:07.007418 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-cilium-run\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007760 kubelet[2698]: I1104 12:36:07.007432 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-cilium-cgroup\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007760 kubelet[2698]: I1104 12:36:07.007448 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38fc86a9-321e-401c-8c54-793929f07026-host-proc-sys-kernel\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007760 kubelet[2698]: I1104 12:36:07.007466 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38fc86a9-321e-401c-8c54-793929f07026-cilium-config-path\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.007869 kubelet[2698]: I1104 12:36:07.007480 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjw4\" (UniqueName: \"kubernetes.io/projected/38fc86a9-321e-401c-8c54-793929f07026-kube-api-access-hhjw4\") pod \"cilium-75rgh\" (UID: \"38fc86a9-321e-401c-8c54-793929f07026\") " pod="kube-system/cilium-75rgh" Nov 4 12:36:07.032305 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 54348 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:36:07.033715 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:36:07.037426 systemd-logind[1532]: New session 24 of user core. Nov 4 12:36:07.047771 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 4 12:36:07.095696 sshd[4468]: Connection closed by 10.0.0.1 port 54348 Nov 4 12:36:07.096073 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Nov 4 12:36:07.106080 systemd[1]: sshd@23-10.0.0.141:22-10.0.0.1:54348.service: Deactivated successfully. Nov 4 12:36:07.109109 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 12:36:07.110139 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit. Nov 4 12:36:07.113818 systemd[1]: Started sshd@24-10.0.0.141:22-10.0.0.1:54358.service - OpenSSH per-connection server daemon (10.0.0.1:54358). Nov 4 12:36:07.124577 systemd-logind[1532]: Removed session 24. Nov 4 12:36:07.168361 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 54358 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:36:07.169591 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:36:07.174041 systemd-logind[1532]: New session 25 of user core. Nov 4 12:36:07.184719 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 12:36:07.275320 kubelet[2698]: E1104 12:36:07.275189 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:07.277519 containerd[1556]: time="2025-11-04T12:36:07.277013217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75rgh,Uid:38fc86a9-321e-401c-8c54-793929f07026,Namespace:kube-system,Attempt:0,}" Nov 4 12:36:07.299091 containerd[1556]: time="2025-11-04T12:36:07.299045207Z" level=info msg="connecting to shim 19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00" address="unix:///run/containerd/s/8e62bc69a06ca7c4ff6b92c4a5a7dd2ba8937e804af8b02125058b3691ffe3e1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:36:07.329741 systemd[1]: Started cri-containerd-19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00.scope - libcontainer container 19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00. 
Nov 4 12:36:07.401908 containerd[1556]: time="2025-11-04T12:36:07.401849588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75rgh,Uid:38fc86a9-321e-401c-8c54-793929f07026,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\"" Nov 4 12:36:07.402852 kubelet[2698]: E1104 12:36:07.402824 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:07.417894 containerd[1556]: time="2025-11-04T12:36:07.417838217Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 12:36:07.470649 containerd[1556]: time="2025-11-04T12:36:07.470614937Z" level=info msg="Container 37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:36:07.480270 containerd[1556]: time="2025-11-04T12:36:07.480229203Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\"" Nov 4 12:36:07.480874 containerd[1556]: time="2025-11-04T12:36:07.480845087Z" level=info msg="StartContainer for \"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\"" Nov 4 12:36:07.481910 containerd[1556]: time="2025-11-04T12:36:07.481868294Z" level=info msg="connecting to shim 37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0" address="unix:///run/containerd/s/8e62bc69a06ca7c4ff6b92c4a5a7dd2ba8937e804af8b02125058b3691ffe3e1" protocol=ttrpc version=3 Nov 4 12:36:07.506817 systemd[1]: Started cri-containerd-37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0.scope - libcontainer container 37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0. Nov 4 12:36:07.543255 containerd[1556]: time="2025-11-04T12:36:07.543139592Z" level=info msg="StartContainer for \"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\" returns successfully" Nov 4 12:36:07.550673 systemd[1]: cri-containerd-37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0.scope: Deactivated successfully. 
Nov 4 12:36:07.552003 containerd[1556]: time="2025-11-04T12:36:07.551963252Z" level=info msg="received exit event container_id:\"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\" id:\"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\" pid:4547 exited_at:{seconds:1762259767 nanos:551710490}" Nov 4 12:36:07.552194 containerd[1556]: time="2025-11-04T12:36:07.552121493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\" id:\"37e0d52bc63b10bae7bbbe01e8b60102f4e446c39b1b468c1255f047feafd0c0\" pid:4547 exited_at:{seconds:1762259767 nanos:551710490}" Nov 4 12:36:07.589227 kubelet[2698]: E1104 12:36:07.589186 2698 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 4 12:36:07.664227 kubelet[2698]: E1104 12:36:07.664195 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:07.668821 containerd[1556]: time="2025-11-04T12:36:07.668725688Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 12:36:07.676634 containerd[1556]: time="2025-11-04T12:36:07.676598981Z" level=info msg="Container 92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:36:07.682531 containerd[1556]: time="2025-11-04T12:36:07.682496862Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\"" Nov 4 12:36:07.682990 containerd[1556]: time="2025-11-04T12:36:07.682964785Z" level=info msg="StartContainer for \"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\"" Nov 4 12:36:07.684255 containerd[1556]: time="2025-11-04T12:36:07.684222633Z" level=info msg="connecting to shim 92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c" address="unix:///run/containerd/s/8e62bc69a06ca7c4ff6b92c4a5a7dd2ba8937e804af8b02125058b3691ffe3e1" protocol=ttrpc version=3 Nov 4 12:36:07.701704 systemd[1]: Started cri-containerd-92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c.scope - libcontainer container 92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c. Nov 4 12:36:07.727303 containerd[1556]: time="2025-11-04T12:36:07.727239727Z" level=info msg="StartContainer for \"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\" returns successfully" Nov 4 12:36:07.734581 systemd[1]: cri-containerd-92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c.scope: Deactivated successfully. 
Nov 4 12:36:07.735242 containerd[1556]: time="2025-11-04T12:36:07.734668177Z" level=info msg="received exit event container_id:\"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\" id:\"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\" pid:4593 exited_at:{seconds:1762259767 nanos:734336055}" Nov 4 12:36:07.735242 containerd[1556]: time="2025-11-04T12:36:07.734872459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\" id:\"92fdb27b1551fe66531c88e0c8f10b3e160529a5c294c5b32c4d9091c1103c3c\" pid:4593 exited_at:{seconds:1762259767 nanos:734336055}" Nov 4 12:36:08.668930 kubelet[2698]: E1104 12:36:08.668163 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:08.673587 containerd[1556]: time="2025-11-04T12:36:08.672310458Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 12:36:08.680281 containerd[1556]: time="2025-11-04T12:36:08.680236192Z" level=info msg="Container 2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:36:08.688465 containerd[1556]: time="2025-11-04T12:36:08.688154806Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\"" Nov 4 12:36:08.688912 containerd[1556]: time="2025-11-04T12:36:08.688884931Z" level=info msg="StartContainer for \"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\"" Nov 4 12:36:08.692550 containerd[1556]: time="2025-11-04T12:36:08.692518716Z" level=info msg="connecting to shim 2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a" address="unix:///run/containerd/s/8e62bc69a06ca7c4ff6b92c4a5a7dd2ba8937e804af8b02125058b3691ffe3e1" protocol=ttrpc version=3 Nov 4 12:36:08.716748 systemd[1]: Started cri-containerd-2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a.scope - libcontainer container 2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a. Nov 4 12:36:08.750581 containerd[1556]: time="2025-11-04T12:36:08.750536032Z" level=info msg="StartContainer for \"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\" returns successfully" Nov 4 12:36:08.750973 systemd[1]: cri-containerd-2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a.scope: Deactivated successfully. 
Nov 4 12:36:08.754496 containerd[1556]: time="2025-11-04T12:36:08.754437379Z" level=info msg="received exit event container_id:\"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\" id:\"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\" pid:4637 exited_at:{seconds:1762259768 nanos:754035616}" Nov 4 12:36:08.754712 containerd[1556]: time="2025-11-04T12:36:08.754461939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\" id:\"2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a\" pid:4637 exited_at:{seconds:1762259768 nanos:754035616}" Nov 4 12:36:08.772169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a864a1e6bcbb009499be7bf8e7f0f251bac45303a1e4a3676faeeeda41f477a-rootfs.mount: Deactivated successfully. Nov 4 12:36:08.983427 kubelet[2698]: I1104 12:36:08.983280 2698 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-04T12:36:08Z","lastTransitionTime":"2025-11-04T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 4 12:36:09.672242 kubelet[2698]: E1104 12:36:09.672201 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:09.678064 containerd[1556]: time="2025-11-04T12:36:09.678027653Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 12:36:09.688901 containerd[1556]: time="2025-11-04T12:36:09.688849407Z" level=info msg="Container da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:36:09.697192 containerd[1556]: time="2025-11-04T12:36:09.697142983Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\"" Nov 4 12:36:09.697928 containerd[1556]: time="2025-11-04T12:36:09.697889549Z" level=info msg="StartContainer for \"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\"" Nov 4 12:36:09.699073 containerd[1556]: time="2025-11-04T12:36:09.699042236Z" level=info msg="connecting to shim da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac" address="unix:///run/containerd/s/8e62bc69a06ca7c4ff6b92c4a5a7dd2ba8937e804af8b02125058b3691ffe3e1" protocol=ttrpc version=3 Nov 4 12:36:09.722701 systemd[1]: Started cri-containerd-da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac.scope - libcontainer container da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac. Nov 4 12:36:09.744376 systemd[1]: cri-containerd-da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac.scope: Deactivated successfully. 
Nov 4 12:36:09.744826 containerd[1556]: time="2025-11-04T12:36:09.744790949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\" id:\"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\" pid:4677 exited_at:{seconds:1762259769 nanos:744520868}" Nov 4 12:36:09.744947 containerd[1556]: time="2025-11-04T12:36:09.744928950Z" level=info msg="received exit event container_id:\"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\" id:\"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\" pid:4677 exited_at:{seconds:1762259769 nanos:744520868}" Nov 4 12:36:09.751259 containerd[1556]: time="2025-11-04T12:36:09.751200913Z" level=info msg="StartContainer for \"da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac\" returns successfully" Nov 4 12:36:09.761811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da7a59899778f426ec5259625849b03b3545c217fb670294c22c20ed2b646dac-rootfs.mount: Deactivated successfully. Nov 4 12:36:10.678293 kubelet[2698]: E1104 12:36:10.678245 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:10.683535 containerd[1556]: time="2025-11-04T12:36:10.683442456Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 12:36:10.697153 containerd[1556]: time="2025-11-04T12:36:10.697106350Z" level=info msg="Container ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:36:10.704571 containerd[1556]: time="2025-11-04T12:36:10.704510680Z" level=info msg="CreateContainer within sandbox \"19e943e931bc8ddf747b0358d9d6e2fbd90d15466738920c482762c83d032b00\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\"" Nov 4 12:36:10.706395 containerd[1556]: time="2025-11-04T12:36:10.706342293Z" level=info msg="StartContainer for \"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\"" Nov 4 12:36:10.707565 containerd[1556]: time="2025-11-04T12:36:10.707284019Z" level=info msg="connecting to shim ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6" address="unix:///run/containerd/s/8e62bc69a06ca7c4ff6b92c4a5a7dd2ba8937e804af8b02125058b3691ffe3e1" protocol=ttrpc version=3 Nov 4 12:36:10.735752 systemd[1]: Started cri-containerd-ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6.scope - libcontainer container ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6. 
Nov 4 12:36:10.766280 containerd[1556]: time="2025-11-04T12:36:10.766216063Z" level=info msg="StartContainer for \"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\" returns successfully" Nov 4 12:36:10.820868 containerd[1556]: time="2025-11-04T12:36:10.820828397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\" id:\"83738c056c127c7ebbfe51aabbeafdfbe34a80d2a116b5ddcb5b2dc79774c897\" pid:4746 exited_at:{seconds:1762259770 nanos:820573595}" Nov 4 12:36:11.026560 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Nov 4 12:36:11.683950 kubelet[2698]: E1104 12:36:11.683874 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:13.278688 kubelet[2698]: E1104 12:36:13.278503 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:13.647163 containerd[1556]: time="2025-11-04T12:36:13.647045922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\" id:\"d26ba0f13b11ec97063f18cf6c3e27aec5211048c80334a94c616be332c094ee\" pid:5170 exit_status:1 exited_at:{seconds:1762259773 nanos:646403758}" Nov 4 12:36:13.822186 systemd-networkd[1468]: lxc_health: Link UP Nov 4 12:36:13.830347 systemd-networkd[1468]: lxc_health: Gained carrier Nov 4 12:36:15.278179 kubelet[2698]: E1104 12:36:15.278141 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:15.292999 kubelet[2698]: I1104 12:36:15.292934 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-75rgh" podStartSLOduration=9.292918056 podStartE2EDuration="9.292918056s" podCreationTimestamp="2025-11-04 12:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:36:11.711348543 +0000 UTC m=+74.339873984" watchObservedRunningTime="2025-11-04 12:36:15.292918056 +0000 UTC m=+77.921443497" Nov 4 12:36:15.473707 systemd-networkd[1468]: lxc_health: Gained IPv6LL Nov 4 12:36:15.692181 kubelet[2698]: E1104 12:36:15.692118 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:15.756662 containerd[1556]: time="2025-11-04T12:36:15.756615493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\" id:\"135515b7d39207494c622d4738271de31368bbd31b20b3a35d77c362c8d655e1\" pid:5285 exited_at:{seconds:1762259775 nanos:755749367}" Nov 4 12:36:16.468483 kubelet[2698]: E1104 12:36:16.468448 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:16.693940 kubelet[2698]: E1104 12:36:16.693913 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:36:17.865586 
containerd[1556]: time="2025-11-04T12:36:17.865520297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\" id:\"fe45faa044738e0fd799c30b989ab8f82975b8b7869b554f14f3faa1bcccae15\" pid:5319 exited_at:{seconds:1762259777 nanos:865255735}" Nov 4 12:36:19.968617 containerd[1556]: time="2025-11-04T12:36:19.968465934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba11a6299555806a8108e8ccaf70d8fa999eb2186c238877df92097cc42714d6\" id:\"c03c49028ab159be210e7a16e977358489375c446f24ce6877926218ffece55a\" pid:5343 exited_at:{seconds:1762259779 nanos:968199612}" Nov 4 12:36:19.975193 sshd[4482]: Connection closed by 10.0.0.1 port 54358 Nov 4 12:36:19.975651 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Nov 4 12:36:19.979196 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. Nov 4 12:36:19.979798 systemd[1]: sshd@24-10.0.0.141:22-10.0.0.1:54358.service: Deactivated successfully. Nov 4 12:36:19.981918 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 12:36:19.983954 systemd-logind[1532]: Removed session 25.