Nov 6 23:22:15.852560 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 6 23:22:15.852581 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Nov 6 21:59:06 -00 2025 Nov 6 23:22:15.852591 kernel: KASLR enabled Nov 6 23:22:15.852596 kernel: efi: EFI v2.7 by EDK II Nov 6 23:22:15.852602 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Nov 6 23:22:15.852607 kernel: random: crng init done Nov 6 23:22:15.852614 kernel: secureboot: Secure boot disabled Nov 6 23:22:15.852619 kernel: ACPI: Early table checksum verification disabled Nov 6 23:22:15.852625 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Nov 6 23:22:15.852632 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 6 23:22:15.852638 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852644 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852650 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852656 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852663 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852670 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852676 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852682 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852688 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:22:15.852694 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 6 
23:22:15.852700 kernel: NUMA: Failed to initialise from firmware Nov 6 23:22:15.852707 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 6 23:22:15.852713 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Nov 6 23:22:15.852719 kernel: Zone ranges: Nov 6 23:22:15.852725 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 6 23:22:15.852732 kernel: DMA32 empty Nov 6 23:22:15.852738 kernel: Normal empty Nov 6 23:22:15.852744 kernel: Movable zone start for each node Nov 6 23:22:15.852750 kernel: Early memory node ranges Nov 6 23:22:15.852756 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Nov 6 23:22:15.852762 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Nov 6 23:22:15.852768 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Nov 6 23:22:15.852774 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Nov 6 23:22:15.852780 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Nov 6 23:22:15.852786 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 6 23:22:15.852792 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 6 23:22:15.852798 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 6 23:22:15.852807 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 6 23:22:15.852813 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 6 23:22:15.852819 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 6 23:22:15.852828 kernel: psci: probing for conduit method from ACPI. Nov 6 23:22:15.852834 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 6 23:22:15.852841 kernel: psci: Using standard PSCI v0.2 function IDs Nov 6 23:22:15.852848 kernel: psci: Trusted OS migration not required Nov 6 23:22:15.852855 kernel: psci: SMC Calling Convention v1.1 Nov 6 23:22:15.852861 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 6 23:22:15.852868 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Nov 6 23:22:15.852874 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Nov 6 23:22:15.852881 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 6 23:22:15.852896 kernel: Detected PIPT I-cache on CPU0 Nov 6 23:22:15.852903 kernel: CPU features: detected: GIC system register CPU interface Nov 6 23:22:15.852910 kernel: CPU features: detected: Hardware dirty bit management Nov 6 23:22:15.852916 kernel: CPU features: detected: Spectre-v4 Nov 6 23:22:15.852924 kernel: CPU features: detected: Spectre-BHB Nov 6 23:22:15.852931 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 6 23:22:15.852937 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 6 23:22:15.852944 kernel: CPU features: detected: ARM erratum 1418040 Nov 6 23:22:15.852954 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 6 23:22:15.852960 kernel: alternatives: applying boot alternatives Nov 6 23:22:15.852968 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=463065366e5b9a391e66d180eedbf8fe1b0462c2e722921ef25580943d9b67c6 Nov 6 23:22:15.852975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 23:22:15.852981 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 23:22:15.852987 kernel: Fallback order for Node 0: 0 Nov 6 23:22:15.852994 
kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 6 23:22:15.853001 kernel: Policy zone: DMA Nov 6 23:22:15.853008 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:22:15.853014 kernel: software IO TLB: area num 4. Nov 6 23:22:15.853021 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Nov 6 23:22:15.853027 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2180K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved) Nov 6 23:22:15.853034 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 6 23:22:15.853040 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:22:15.853047 kernel: rcu: RCU event tracing is enabled. Nov 6 23:22:15.853054 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 6 23:22:15.853061 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:22:15.853067 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:22:15.853074 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 6 23:22:15.853083 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 6 23:22:15.853089 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 6 23:22:15.853096 kernel: GICv3: 256 SPIs implemented Nov 6 23:22:15.853102 kernel: GICv3: 0 Extended SPIs implemented Nov 6 23:22:15.853108 kernel: Root IRQ handler: gic_handle_irq Nov 6 23:22:15.853115 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 6 23:22:15.853121 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 6 23:22:15.853127 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 6 23:22:15.853134 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Nov 6 23:22:15.853141 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Nov 6 23:22:15.853147 kernel: GICv3: using LPI property table @0x00000000400f0000 Nov 6 23:22:15.853155 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Nov 6 23:22:15.853161 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 23:22:15.853168 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 6 23:22:15.853174 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 6 23:22:15.853181 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 6 23:22:15.853187 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 6 23:22:15.853194 kernel: arm-pv: using stolen time PV Nov 6 23:22:15.853200 kernel: Console: colour dummy device 80x25 Nov 6 23:22:15.853207 kernel: ACPI: Core revision 20230628 Nov 6 23:22:15.853214 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Nov 6 23:22:15.853220 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:22:15.853229 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 6 23:22:15.853235 kernel: landlock: Up and running. Nov 6 23:22:15.853302 kernel: SELinux: Initializing. Nov 6 23:22:15.853312 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 23:22:15.853318 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 23:22:15.853325 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:22:15.853332 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:22:15.853339 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:22:15.853345 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:22:15.853354 kernel: Platform MSI: ITS@0x8080000 domain created Nov 6 23:22:15.853360 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 6 23:22:15.853367 kernel: Remapping and enabling EFI services. Nov 6 23:22:15.853374 kernel: smp: Bringing up secondary CPUs ... 
Nov 6 23:22:15.853380 kernel: Detected PIPT I-cache on CPU1 Nov 6 23:22:15.853387 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 6 23:22:15.853394 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Nov 6 23:22:15.853400 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 6 23:22:15.853407 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 6 23:22:15.853415 kernel: Detected PIPT I-cache on CPU2 Nov 6 23:22:15.853422 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 6 23:22:15.853433 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Nov 6 23:22:15.853441 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 6 23:22:15.853448 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 6 23:22:15.853455 kernel: Detected PIPT I-cache on CPU3 Nov 6 23:22:15.853462 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 6 23:22:15.853470 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Nov 6 23:22:15.853478 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 6 23:22:15.853485 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 6 23:22:15.853506 kernel: smp: Brought up 1 node, 4 CPUs Nov 6 23:22:15.853513 kernel: SMP: Total of 4 processors activated. 
Nov 6 23:22:15.853520 kernel: CPU features: detected: 32-bit EL0 Support Nov 6 23:22:15.853527 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 6 23:22:15.853534 kernel: CPU features: detected: Common not Private translations Nov 6 23:22:15.853541 kernel: CPU features: detected: CRC32 instructions Nov 6 23:22:15.853548 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 6 23:22:15.853556 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 6 23:22:15.853563 kernel: CPU features: detected: LSE atomic instructions Nov 6 23:22:15.853570 kernel: CPU features: detected: Privileged Access Never Nov 6 23:22:15.853577 kernel: CPU features: detected: RAS Extension Support Nov 6 23:22:15.853584 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 6 23:22:15.853591 kernel: CPU: All CPU(s) started at EL1 Nov 6 23:22:15.853598 kernel: alternatives: applying system-wide alternatives Nov 6 23:22:15.853605 kernel: devtmpfs: initialized Nov 6 23:22:15.853612 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:22:15.853620 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 6 23:22:15.853627 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:22:15.853634 kernel: SMBIOS 3.0.0 present. 
Nov 6 23:22:15.853641 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 6 23:22:15.853648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:22:15.853655 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 6 23:22:15.853662 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 6 23:22:15.853669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 6 23:22:15.853676 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:22:15.853684 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Nov 6 23:22:15.853691 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:22:15.853698 kernel: cpuidle: using governor menu Nov 6 23:22:15.853705 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 6 23:22:15.853712 kernel: ASID allocator initialised with 32768 entries Nov 6 23:22:15.853719 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:22:15.853726 kernel: Serial: AMBA PL011 UART driver Nov 6 23:22:15.853733 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 6 23:22:15.853740 kernel: Modules: 0 pages in range for non-PLT usage Nov 6 23:22:15.853748 kernel: Modules: 509248 pages in range for PLT usage Nov 6 23:22:15.853755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 23:22:15.853762 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 23:22:15.853769 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 6 23:22:15.853776 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 6 23:22:15.853783 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:22:15.853790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:22:15.853796 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 6 23:22:15.853803 
kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 6 23:22:15.853811 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:22:15.853818 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:22:15.853825 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:22:15.853832 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 23:22:15.853839 kernel: ACPI: Interpreter enabled Nov 6 23:22:15.853846 kernel: ACPI: Using GIC for interrupt routing Nov 6 23:22:15.853853 kernel: ACPI: MCFG table detected, 1 entries Nov 6 23:22:15.853860 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 6 23:22:15.853867 kernel: printk: console [ttyAMA0] enabled Nov 6 23:22:15.853874 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 23:22:15.854024 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 23:22:15.854098 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 6 23:22:15.854160 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 6 23:22:15.854221 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 6 23:22:15.854310 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 6 23:22:15.854320 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 6 23:22:15.854328 kernel: PCI host bridge to bus 0000:00 Nov 6 23:22:15.854401 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 6 23:22:15.854458 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 6 23:22:15.854513 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 6 23:22:15.854567 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 23:22:15.854642 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 6 23:22:15.854720 kernel: pci 0000:00:01.0: [1af4:1005] 
type 00 class 0x00ff00 Nov 6 23:22:15.854792 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 6 23:22:15.854856 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Nov 6 23:22:15.854933 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 6 23:22:15.854997 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 6 23:22:15.855059 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 6 23:22:15.855122 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 6 23:22:15.855180 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 6 23:22:15.855238 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 6 23:22:15.855323 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 6 23:22:15.855333 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 6 23:22:15.855340 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 6 23:22:15.855347 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 6 23:22:15.855354 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 6 23:22:15.855361 kernel: iommu: Default domain type: Translated Nov 6 23:22:15.855368 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 6 23:22:15.855378 kernel: efivars: Registered efivars operations Nov 6 23:22:15.855385 kernel: vgaarb: loaded Nov 6 23:22:15.855392 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 6 23:22:15.855399 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:22:15.855406 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 23:22:15.855413 kernel: pnp: PnP ACPI init Nov 6 23:22:15.855483 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 6 23:22:15.855494 kernel: pnp: PnP ACPI: found 1 devices Nov 6 23:22:15.855501 kernel: NET: Registered PF_INET protocol family Nov 6 23:22:15.855510 kernel: IP 
idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 23:22:15.855517 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 6 23:22:15.855524 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:22:15.855532 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 23:22:15.855539 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 6 23:22:15.855546 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 6 23:22:15.855553 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 23:22:15.855560 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 23:22:15.855567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:22:15.855576 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:22:15.855583 kernel: kvm [1]: HYP mode not available Nov 6 23:22:15.855590 kernel: Initialise system trusted keyrings Nov 6 23:22:15.855597 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 6 23:22:15.855604 kernel: Key type asymmetric registered Nov 6 23:22:15.855611 kernel: Asymmetric key parser 'x509' registered Nov 6 23:22:15.855618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 23:22:15.855625 kernel: io scheduler mq-deadline registered Nov 6 23:22:15.855632 kernel: io scheduler kyber registered Nov 6 23:22:15.855640 kernel: io scheduler bfq registered Nov 6 23:22:15.855647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 6 23:22:15.855654 kernel: ACPI: button: Power Button [PWRB] Nov 6 23:22:15.855661 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 6 23:22:15.855726 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 6 23:22:15.855735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 23:22:15.855742 kernel: thunder_xcv, ver 1.0 Nov 
6 23:22:15.855749 kernel: thunder_bgx, ver 1.0 Nov 6 23:22:15.855756 kernel: nicpf, ver 1.0 Nov 6 23:22:15.855766 kernel: nicvf, ver 1.0 Nov 6 23:22:15.855836 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 6 23:22:15.855906 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-06T23:22:15 UTC (1762471335) Nov 6 23:22:15.855916 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 23:22:15.855923 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 6 23:22:15.855930 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 6 23:22:15.855937 kernel: watchdog: Hard watchdog permanently disabled Nov 6 23:22:15.855944 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:22:15.855953 kernel: Segment Routing with IPv6 Nov 6 23:22:15.855960 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:22:15.855967 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:22:15.855974 kernel: Key type dns_resolver registered Nov 6 23:22:15.855981 kernel: registered taskstats version 1 Nov 6 23:22:15.855988 kernel: Loading compiled-in X.509 certificates Nov 6 23:22:15.855996 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e53d3b094875ce4245a8b2684246260baeee1996' Nov 6 23:22:15.856003 kernel: Key type .fscrypt registered Nov 6 23:22:15.856010 kernel: Key type fscrypt-provisioning registered Nov 6 23:22:15.856018 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 6 23:22:15.856025 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:22:15.856045 kernel: ima: No architecture policies found Nov 6 23:22:15.856052 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 6 23:22:15.856060 kernel: clk: Disabling unused clocks Nov 6 23:22:15.856067 kernel: Freeing unused kernel memory: 38400K Nov 6 23:22:15.856074 kernel: Run /init as init process Nov 6 23:22:15.856081 kernel: with arguments: Nov 6 23:22:15.856088 kernel: /init Nov 6 23:22:15.856096 kernel: with environment: Nov 6 23:22:15.856103 kernel: HOME=/ Nov 6 23:22:15.856110 kernel: TERM=linux Nov 6 23:22:15.856118 systemd[1]: Successfully made /usr/ read-only. Nov 6 23:22:15.856128 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:22:15.856136 systemd[1]: Detected virtualization kvm. Nov 6 23:22:15.856143 systemd[1]: Detected architecture arm64. Nov 6 23:22:15.856153 systemd[1]: Running in initrd. Nov 6 23:22:15.856160 systemd[1]: No hostname configured, using default hostname. Nov 6 23:22:15.856168 systemd[1]: Hostname set to . Nov 6 23:22:15.856175 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:22:15.856183 systemd[1]: Queued start job for default target initrd.target. Nov 6 23:22:15.856190 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:22:15.856198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:22:15.856206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 6 23:22:15.856213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:22:15.856222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 23:22:15.856231 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 23:22:15.856240 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 23:22:15.856257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 23:22:15.856277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:22:15.856285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:22:15.856295 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:22:15.856302 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:22:15.856310 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:22:15.856317 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:22:15.856325 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:22:15.856333 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:22:15.856340 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 23:22:15.856348 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 23:22:15.856355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:22:15.856365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:22:15.856372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:22:15.856380 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:22:15.856387 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Nov 6 23:22:15.856395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:22:15.856402 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 23:22:15.856410 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 23:22:15.856417 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:22:15.856425 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:22:15.856434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:22:15.856441 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 23:22:15.856449 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:22:15.856457 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 23:22:15.856484 systemd-journald[238]: Collecting audit messages is disabled. Nov 6 23:22:15.856503 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:22:15.856511 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:22:15.856520 systemd-journald[238]: Journal started Nov 6 23:22:15.856541 systemd-journald[238]: Runtime Journal (/run/log/journal/f7c60496119a4bd8a45a65bbb3047b77) is 5.9M, max 47.3M, 41.4M free. Nov 6 23:22:15.847318 systemd-modules-load[240]: Inserted module 'overlay' Nov 6 23:22:15.861279 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 23:22:15.861323 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:22:15.863595 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 6 23:22:15.866461 kernel: Bridge firewalling registered Nov 6 23:22:15.863815 systemd-modules-load[240]: Inserted module 'br_netfilter' Nov 6 23:22:15.866266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:22:15.869894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:22:15.871695 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:22:15.876411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:22:15.878118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:22:15.882833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:22:15.888101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:22:15.889700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:22:15.893460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:22:15.895741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:22:15.897806 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 23:22:15.918878 dracut-cmdline[278]: dracut-dracut-053 Nov 6 23:22:15.921438 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=463065366e5b9a391e66d180eedbf8fe1b0462c2e722921ef25580943d9b67c6 Nov 6 23:22:15.926121 systemd-resolved[276]: Positive Trust Anchors: Nov 6 23:22:15.926131 systemd-resolved[276]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:22:15.926161 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:22:15.931111 systemd-resolved[276]: Defaulting to hostname 'linux'. Nov 6 23:22:15.932075 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:22:15.936000 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:22:15.984277 kernel: SCSI subsystem initialized Nov 6 23:22:15.989267 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:22:15.996271 kernel: iscsi: registered transport (tcp) Nov 6 23:22:16.009267 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:22:16.009284 kernel: QLogic iSCSI HBA Driver Nov 6 23:22:16.050045 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 23:22:16.065386 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:22:16.080399 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 6 23:22:16.080440 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:22:16.081534 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 6 23:22:16.126295 kernel: raid6: neonx8 gen() 15771 MB/s Nov 6 23:22:16.143278 kernel: raid6: neonx4 gen() 15807 MB/s Nov 6 23:22:16.160278 kernel: raid6: neonx2 gen() 13204 MB/s Nov 6 23:22:16.177271 kernel: raid6: neonx1 gen() 10511 MB/s Nov 6 23:22:16.194282 kernel: raid6: int64x8 gen() 6786 MB/s Nov 6 23:22:16.211270 kernel: raid6: int64x4 gen() 7334 MB/s Nov 6 23:22:16.228285 kernel: raid6: int64x2 gen() 6105 MB/s Nov 6 23:22:16.245422 kernel: raid6: int64x1 gen() 5053 MB/s Nov 6 23:22:16.245448 kernel: raid6: using algorithm neonx4 gen() 15807 MB/s Nov 6 23:22:16.263512 kernel: raid6: .... xor() 12369 MB/s, rmw enabled Nov 6 23:22:16.263538 kernel: raid6: using neon recovery algorithm Nov 6 23:22:16.269787 kernel: xor: measuring software checksum speed Nov 6 23:22:16.269804 kernel: 8regs : 21618 MB/sec Nov 6 23:22:16.269813 kernel: 32regs : 21590 MB/sec Nov 6 23:22:16.270461 kernel: arm64_neon : 24587 MB/sec Nov 6 23:22:16.270480 kernel: xor: using function: arm64_neon (24587 MB/sec) Nov 6 23:22:16.319273 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:22:16.330303 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:22:16.338387 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:22:16.352462 systemd-udevd[462]: Using default interface naming scheme 'v255'. Nov 6 23:22:16.356258 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:22:16.365420 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 23:22:16.377142 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Nov 6 23:22:16.404660 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 6 23:22:16.416420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 23:22:16.457708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:22:16.471527 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 23:22:16.485171 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:22:16.488107 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:22:16.489859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:22:16.492199 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 23:22:16.499409 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 23:22:16.510748 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:22:16.514267 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 6 23:22:16.514390 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 6 23:22:16.522522 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 23:22:16.522558 kernel: GPT:9289727 != 19775487
Nov 6 23:22:16.522568 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 23:22:16.523271 kernel: GPT:9289727 != 19775487
Nov 6 23:22:16.524388 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 23:22:16.524422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:22:16.527291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 23:22:16.527411 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:22:16.531103 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:22:16.532430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 23:22:16.532566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:22:16.538600 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:22:16.545425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:22:16.554303 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (525)
Nov 6 23:22:16.554337 kernel: BTRFS: device fsid 8ac35527-52fd-4925-acbb-f12804e07c02 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (524)
Nov 6 23:22:16.562069 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 23:22:16.563629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:22:16.574139 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 23:22:16.590523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 23:22:16.596817 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 23:22:16.598179 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 6 23:22:16.613411 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 23:22:16.615331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:22:16.620876 disk-uuid[552]: Primary Header is updated.
Nov 6 23:22:16.620876 disk-uuid[552]: Secondary Entries is updated.
Nov 6 23:22:16.620876 disk-uuid[552]: Secondary Header is updated.
Nov 6 23:22:16.624276 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:22:16.645146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:22:17.630626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:22:17.630980 disk-uuid[553]: The operation has completed successfully.
Nov 6 23:22:17.655568 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 23:22:17.655669 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 23:22:17.695391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 6 23:22:17.698225 sh[574]: Success
Nov 6 23:22:17.708259 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 6 23:22:17.737575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 23:22:17.748107 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 6 23:22:17.750445 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 6 23:22:17.760780 kernel: BTRFS info (device dm-0): first mount of filesystem 8ac35527-52fd-4925-acbb-f12804e07c02
Nov 6 23:22:17.760815 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:22:17.760825 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 6 23:22:17.762746 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 23:22:17.762761 kernel: BTRFS info (device dm-0): using free space tree
Nov 6 23:22:17.767506 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 6 23:22:17.768830 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 23:22:17.781466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 23:22:17.783080 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 23:22:17.797681 kernel: BTRFS info (device vda6): first mount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:22:17.797729 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:22:17.798509 kernel: BTRFS info (device vda6): using free space tree
Nov 6 23:22:17.801266 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 6 23:22:17.805283 kernel: BTRFS info (device vda6): last unmount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:22:17.807977 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 23:22:17.817553 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 23:22:17.866856 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:22:17.873426 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 23:22:17.878964 ignition[666]: Ignition 2.20.0
Nov 6 23:22:17.878974 ignition[666]: Stage: fetch-offline
Nov 6 23:22:17.879007 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:22:17.879015 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:22:17.879170 ignition[666]: parsed url from cmdline: ""
Nov 6 23:22:17.879173 ignition[666]: no config URL provided
Nov 6 23:22:17.879178 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 23:22:17.879185 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Nov 6 23:22:17.879208 ignition[666]: op(1): [started] loading QEMU firmware config module
Nov 6 23:22:17.879212 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 6 23:22:17.885720 ignition[666]: op(1): [finished] loading QEMU firmware config module
Nov 6 23:22:17.899782 systemd-networkd[762]: lo: Link UP
Nov 6 23:22:17.899797 systemd-networkd[762]: lo: Gained carrier
Nov 6 23:22:17.900613 systemd-networkd[762]: Enumeration completed
Nov 6 23:22:17.900865 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 23:22:17.901021 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 23:22:17.901025 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 23:22:17.901830 systemd-networkd[762]: eth0: Link UP
Nov 6 23:22:17.901834 systemd-networkd[762]: eth0: Gained carrier
Nov 6 23:22:17.901840 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 23:22:17.903499 systemd[1]: Reached target network.target - Network.
Nov 6 23:22:17.917286 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 23:22:17.939589 ignition[666]: parsing config with SHA512: 7fad20cff1161feed4f4173faa560aca59acebe7c8a708eca40946f4153fa58a7a97a0dca9c9d5d493b7ae02f836e9b55308c63a6942f352f895af924ef3f03b
Nov 6 23:22:17.944007 unknown[666]: fetched base config from "system"
Nov 6 23:22:17.944019 unknown[666]: fetched user config from "qemu"
Nov 6 23:22:17.944431 ignition[666]: fetch-offline: fetch-offline passed
Nov 6 23:22:17.946434 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:22:17.944503 ignition[666]: Ignition finished successfully
Nov 6 23:22:17.947736 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 23:22:17.955429 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 23:22:17.968264 ignition[769]: Ignition 2.20.0
Nov 6 23:22:17.968272 ignition[769]: Stage: kargs
Nov 6 23:22:17.968422 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:22:17.968432 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:22:17.969319 ignition[769]: kargs: kargs passed
Nov 6 23:22:17.969365 ignition[769]: Ignition finished successfully
Nov 6 23:22:17.973288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 23:22:17.981376 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 23:22:17.991866 ignition[777]: Ignition 2.20.0
Nov 6 23:22:17.991876 ignition[777]: Stage: disks
Nov 6 23:22:17.992031 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:22:17.992041 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:22:17.994481 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 23:22:17.992863 ignition[777]: disks: disks passed
Nov 6 23:22:17.996334 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 23:22:17.992918 ignition[777]: Ignition finished successfully
Nov 6 23:22:17.998143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 23:22:17.999888 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 23:22:18.001821 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 23:22:18.003510 systemd[1]: Reached target basic.target - Basic System.
Nov 6 23:22:18.013431 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 23:22:18.024593 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 6 23:22:18.029061 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 23:22:18.041375 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 23:22:18.081260 kernel: EXT4-fs (vda9): mounted filesystem 93ef6c07-4a07-4e6a-86ce-df7a94c95ac7 r/w with ordered data mode. Quota mode: none.
Nov 6 23:22:18.081801 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 23:22:18.083092 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 23:22:18.094360 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 23:22:18.096187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 23:22:18.097227 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 23:22:18.097280 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 23:22:18.097302 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:22:18.109925 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (796)
Nov 6 23:22:18.109949 kernel: BTRFS info (device vda6): first mount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:22:18.109960 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:22:18.109969 kernel: BTRFS info (device vda6): using free space tree
Nov 6 23:22:18.101410 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 23:22:18.103133 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 23:22:18.114268 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 6 23:22:18.115492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 23:22:18.145107 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 23:22:18.148480 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Nov 6 23:22:18.151377 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 23:22:18.155034 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 23:22:18.219010 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 23:22:18.228349 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 23:22:18.229921 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 23:22:18.236271 kernel: BTRFS info (device vda6): last unmount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:22:18.249968 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 23:22:18.257561 ignition[909]: INFO : Ignition 2.20.0
Nov 6 23:22:18.257561 ignition[909]: INFO : Stage: mount
Nov 6 23:22:18.257561 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:22:18.257561 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:22:18.257561 ignition[909]: INFO : mount: mount passed
Nov 6 23:22:18.257561 ignition[909]: INFO : Ignition finished successfully
Nov 6 23:22:18.260450 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 23:22:18.274366 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 23:22:18.877074 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 23:22:18.886438 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 23:22:18.892262 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (925)
Nov 6 23:22:18.894566 kernel: BTRFS info (device vda6): first mount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:22:18.894595 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:22:18.894606 kernel: BTRFS info (device vda6): using free space tree
Nov 6 23:22:18.897261 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 6 23:22:18.898577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 23:22:18.914295 ignition[942]: INFO : Ignition 2.20.0
Nov 6 23:22:18.914295 ignition[942]: INFO : Stage: files
Nov 6 23:22:18.915956 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:22:18.915956 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:22:18.915956 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 23:22:18.919531 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 23:22:18.919531 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 23:22:18.923053 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 23:22:18.924513 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 23:22:18.924513 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 23:22:18.923639 unknown[942]: wrote ssh authorized keys file for user: core
Nov 6 23:22:18.928341 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 6 23:22:18.928341 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 6 23:22:19.140421 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 23:22:19.590506 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 6 23:22:19.590506 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 23:22:19.594724 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Nov 6 23:22:19.649370 systemd-networkd[762]: eth0: Gained IPv6LL
Nov 6 23:22:19.801681 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 6 23:22:19.920152 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 23:22:19.920152 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 6 23:22:19.924029 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 6 23:22:21.379692 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 6 23:22:21.903456 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 6 23:22:21.903456 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 6 23:22:21.907381 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 23:22:21.921952 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 23:22:21.923520 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 23:22:21.923520 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 23:22:21.923520 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 23:22:21.923520 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 23:22:21.923520 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:22:21.932264 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:22:21.932264 ignition[942]: INFO : files: files passed
Nov 6 23:22:21.932264 ignition[942]: INFO : Ignition finished successfully
Nov 6 23:22:21.925538 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 23:22:21.936364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 23:22:21.940394 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 23:22:21.941763 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 23:22:21.941839 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 23:22:21.945771 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 6 23:22:21.947161 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:22:21.947161 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:22:21.950240 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:22:21.949051 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:22:21.951807 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 23:22:21.965400 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 23:22:21.980725 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 23:22:21.980839 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 23:22:21.983113 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 23:22:21.985094 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 23:22:21.987106 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 23:22:21.987939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 23:22:22.002942 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:22:22.013386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 23:22:22.020464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:22:22.021746 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:22:22.023870 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 23:22:22.025687 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 23:22:22.025792 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:22:22.028313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 23:22:22.030323 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 23:22:22.032080 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 23:22:22.033912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:22:22.035932 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 23:22:22.037960 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 23:22:22.039872 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:22:22.041885 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 23:22:22.043936 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 23:22:22.045731 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 23:22:22.047309 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 23:22:22.047437 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:22:22.049905 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:22:22.051973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:22:22.053983 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 23:22:22.057310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:22:22.058572 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 23:22:22.058681 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:22:22.061594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 23:22:22.061712 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:22:22.063770 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 23:22:22.065358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 23:22:22.066394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:22:22.068383 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 23:22:22.070197 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 23:22:22.072404 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 23:22:22.072487 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:22:22.074062 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 23:22:22.074139 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:22:22.075797 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 23:22:22.075914 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:22:22.077615 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 23:22:22.077713 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 23:22:22.088394 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 23:22:22.089291 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 23:22:22.089416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:22:22.092100 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 23:22:22.093046 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 23:22:22.093173 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:22:22.099337 ignition[998]: INFO : Ignition 2.20.0
Nov 6 23:22:22.099337 ignition[998]: INFO : Stage: umount
Nov 6 23:22:22.099337 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:22:22.099337 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:22:22.095357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 23:22:22.105515 ignition[998]: INFO : umount: umount passed
Nov 6 23:22:22.105515 ignition[998]: INFO : Ignition finished successfully
Nov 6 23:22:22.095455 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 23:22:22.101264 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 23:22:22.102316 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 23:22:22.104703 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 23:22:22.104785 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 23:22:22.107381 systemd[1]: Stopped target network.target - Network.
Nov 6 23:22:22.108331 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 23:22:22.108396 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 23:22:22.110513 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 23:22:22.110567 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 23:22:22.112285 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 23:22:22.112336 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 23:22:22.114283 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 23:22:22.114332 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 23:22:22.116399 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 23:22:22.118614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 23:22:22.121238 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 23:22:22.128962 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 23:22:22.129083 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 23:22:22.132580 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 6 23:22:22.132780 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 23:22:22.132889 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 23:22:22.135889 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 6 23:22:22.136503 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 23:22:22.136555 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:22:22.148371 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 23:22:22.149297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 23:22:22.149366 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:22:22.151603 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 23:22:22.151651 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:22:22.154801 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 23:22:22.154847 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:22:22.156818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 23:22:22.156872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:22:22.159815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:22:22.163972 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 23:22:22.164036 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:22:22.169911 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:22:22.170044 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:22:22.174897 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:22:22.175045 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:22:22.177656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:22:22.177763 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:22:22.179616 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:22:22.179647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:22:22.181461 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:22:22.181511 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:22:22.184300 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:22:22.184350 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:22:22.187175 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:22:22.187222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:22:22.201416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:22:22.202463 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 23:22:22.202518 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 6 23:22:22.205655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:22:22.205696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:22:22.209607 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 23:22:22.209658 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:22:22.209939 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 23:22:22.210015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:22:22.211211 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:22:22.211338 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:22:22.214038 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:22:22.215639 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:22:22.215700 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 23:22:22.218106 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:22:22.226969 systemd[1]: Switching root. Nov 6 23:22:22.254233 systemd-journald[238]: Journal stopped Nov 6 23:22:23.020646 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Nov 6 23:22:23.020700 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:22:23.020712 kernel: SELinux: policy capability open_perms=1 Nov 6 23:22:23.020724 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:22:23.020734 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:22:23.020747 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:22:23.020756 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:22:23.020769 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:22:23.020779 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:22:23.020788 kernel: audit: type=1403 audit(1762471342.435:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:22:23.020799 systemd[1]: Successfully loaded SELinux policy in 31.239ms. Nov 6 23:22:23.020820 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.478ms. Nov 6 23:22:23.020832 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:22:23.020843 systemd[1]: Detected virtualization kvm. Nov 6 23:22:23.020853 systemd[1]: Detected architecture arm64. Nov 6 23:22:23.020871 systemd[1]: Detected first boot. Nov 6 23:22:23.020886 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:22:23.020897 zram_generator::config[1049]: No configuration found. Nov 6 23:22:23.020908 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:22:23.020918 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:22:23.020931 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 23:22:23.020942 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Nov 6 23:22:23.020953 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:22:23.020963 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:22:23.020981 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:22:23.020992 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:22:23.021003 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:22:23.021013 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:22:23.021024 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:22:23.021034 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:22:23.021044 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:22:23.021055 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:22:23.021066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:22:23.021078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:22:23.021089 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:22:23.021100 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:22:23.021111 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:22:23.021121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:22:23.021132 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 6 23:22:23.021142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 6 23:22:23.021152 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:22:23.021164 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:22:23.021174 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 23:22:23.021184 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:22:23.021194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:22:23.021205 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:22:23.021216 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:22:23.021226 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:22:23.021236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:22:23.021256 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:22:23.021269 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 23:22:23.021279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:22:23.021289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:22:23.021299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:22:23.021309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:22:23.021319 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:22:23.021329 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:22:23.021340 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:22:23.021352 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:22:23.021362 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:22:23.021372 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 6 23:22:23.021382 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:22:23.021457 systemd[1]: Reached target machines.target - Containers. Nov 6 23:22:23.021472 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 23:22:23.021483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:22:23.021494 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:22:23.021506 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 23:22:23.021523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:22:23.021534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:22:23.021544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:22:23.021554 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:22:23.021564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:22:23.021574 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:22:23.021584 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:22:23.021594 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:22:23.021606 kernel: loop: module loaded Nov 6 23:22:23.021616 kernel: fuse: init (API version 7.39) Nov 6 23:22:23.021626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 23:22:23.021636 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 6 23:22:23.021647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:22:23.021657 kernel: ACPI: bus type drm_connector registered Nov 6 23:22:23.021666 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:22:23.021676 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:22:23.021687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:22:23.021698 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:22:23.021709 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:22:23.021719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:22:23.021752 systemd-journald[1124]: Collecting audit messages is disabled. Nov 6 23:22:23.021775 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 23:22:23.021785 systemd[1]: Stopped verity-setup.service. Nov 6 23:22:23.021796 systemd-journald[1124]: Journal started Nov 6 23:22:23.021817 systemd-journald[1124]: Runtime Journal (/run/log/journal/f7c60496119a4bd8a45a65bbb3047b77) is 5.9M, max 47.3M, 41.4M free. Nov 6 23:22:22.811369 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:22:22.823596 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 23:22:22.823985 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:22:23.027264 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:22:23.027836 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 23:22:23.029096 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Nov 6 23:22:23.030380 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:22:23.031473 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 23:22:23.032628 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 23:22:23.033837 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:22:23.036290 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:22:23.037681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:22:23.040604 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:22:23.040774 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:22:23.042199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:22:23.042395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:22:23.043873 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:22:23.044029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:22:23.045386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:22:23.045552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:22:23.047076 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:22:23.047225 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:22:23.048554 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:22:23.048714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:22:23.050085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:22:23.054283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:22:23.055900 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Nov 6 23:22:23.057502 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:22:23.070005 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:22:23.077336 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:22:23.079253 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:22:23.080332 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:22:23.080370 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:22:23.082175 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:22:23.084391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:22:23.086403 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:22:23.087544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:22:23.088908 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:22:23.091299 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:22:23.092552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:22:23.097371 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 23:22:23.099002 systemd-journald[1124]: Time spent on flushing to /var/log/journal/f7c60496119a4bd8a45a65bbb3047b77 is 13.537ms for 865 entries. Nov 6 23:22:23.099002 systemd-journald[1124]: System Journal (/var/log/journal/f7c60496119a4bd8a45a65bbb3047b77) is 8M, max 195.6M, 187.6M free. 
Nov 6 23:22:23.129500 systemd-journald[1124]: Received client request to flush runtime journal. Nov 6 23:22:23.099416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:22:23.101399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:22:23.105451 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:22:23.108963 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:22:23.112971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:22:23.114538 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:22:23.117259 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:22:23.119004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:22:23.131545 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:22:23.133278 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:22:23.134902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:22:23.138362 kernel: loop0: detected capacity change from 0 to 123192 Nov 6 23:22:23.141632 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:22:23.150275 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:22:23.156704 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:22:23.158973 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 6 23:22:23.162289 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:22:23.165517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Nov 6 23:22:23.170263 kernel: loop1: detected capacity change from 0 to 113512 Nov 6 23:22:23.173157 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:22:23.192284 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:22:23.195200 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 6 23:22:23.204478 kernel: loop2: detected capacity change from 0 to 211168 Nov 6 23:22:23.207831 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Nov 6 23:22:23.207848 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Nov 6 23:22:23.212635 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:22:23.245271 kernel: loop3: detected capacity change from 0 to 123192 Nov 6 23:22:23.251275 kernel: loop4: detected capacity change from 0 to 113512 Nov 6 23:22:23.257274 kernel: loop5: detected capacity change from 0 to 211168 Nov 6 23:22:23.262995 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 6 23:22:23.263451 (sd-merge)[1190]: Merged extensions into '/usr'. Nov 6 23:22:23.267763 systemd[1]: Reload requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 23:22:23.267787 systemd[1]: Reloading... Nov 6 23:22:23.325274 zram_generator::config[1216]: No configuration found. Nov 6 23:22:23.385978 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:22:23.431752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:22:23.481295 systemd[1]: Reloading finished in 212 ms. Nov 6 23:22:23.500257 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Nov 6 23:22:23.501828 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:22:23.513509 systemd[1]: Starting ensure-sysext.service... Nov 6 23:22:23.515316 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:22:23.524205 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:22:23.524219 systemd[1]: Reloading... Nov 6 23:22:23.530043 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:22:23.530334 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:22:23.530968 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:22:23.531183 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 6 23:22:23.531235 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 6 23:22:23.533818 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:22:23.533825 systemd-tmpfiles[1254]: Skipping /boot Nov 6 23:22:23.541912 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:22:23.541928 systemd-tmpfiles[1254]: Skipping /boot Nov 6 23:22:23.569265 zram_generator::config[1283]: No configuration found. Nov 6 23:22:23.649142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:22:23.699206 systemd[1]: Reloading finished in 174 ms. Nov 6 23:22:23.709878 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 23:22:23.728310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 6 23:22:23.736917 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:22:23.739478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:22:23.742007 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:22:23.745076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:22:23.749550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:22:23.755168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 23:22:23.761989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:22:23.762967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:22:23.767983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:22:23.774522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:22:23.776581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:22:23.776784 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:22:23.784471 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Nov 6 23:22:23.789456 augenrules[1348]: No rules Nov 6 23:22:23.789517 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:22:23.791807 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:22:23.792020 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:22:23.794394 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 6 23:22:23.796625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:22:23.797079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:22:23.800669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:22:23.803174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:22:23.807652 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:22:23.808021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:22:23.811976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:22:23.819288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:22:23.835285 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:22:23.845842 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:22:23.853921 systemd[1]: Finished ensure-sysext.service. Nov 6 23:22:23.866285 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 6 23:22:23.868265 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1368) Nov 6 23:22:23.869793 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:22:23.871914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:22:23.874420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:22:23.881089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:22:23.885702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:22:23.889153 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 6 23:22:23.890376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:22:23.890421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:22:23.892994 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:22:23.898018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 23:22:23.901450 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:22:23.902533 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:22:23.906552 augenrules[1388]: /sbin/augenrules: No change Nov 6 23:22:23.911383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:22:23.912359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:22:23.913790 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:22:23.913980 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:22:23.915409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:22:23.915576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:22:23.917089 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:22:23.917239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:22:23.923333 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:22:23.929727 augenrules[1420]: No rules Nov 6 23:22:23.933189 systemd[1]: audit-rules.service: Deactivated successfully. 
Nov 6 23:22:23.933405 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:22:23.938605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:22:23.951093 systemd-resolved[1322]: Positive Trust Anchors: Nov 6 23:22:23.951114 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:22:23.951145 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:22:23.953441 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:22:23.954757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:22:23.954823 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:22:23.959736 systemd-resolved[1322]: Defaulting to hostname 'linux'. Nov 6 23:22:23.961805 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:22:23.963618 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:22:23.965232 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 6 23:22:23.991414 systemd-networkd[1401]: lo: Link UP Nov 6 23:22:23.991423 systemd-networkd[1401]: lo: Gained carrier Nov 6 23:22:23.992237 systemd-networkd[1401]: Enumeration completed Nov 6 23:22:23.992665 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:22:23.993080 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:22:23.993156 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:22:23.993695 systemd-networkd[1401]: eth0: Link UP Nov 6 23:22:23.993780 systemd-networkd[1401]: eth0: Gained carrier Nov 6 23:22:23.993831 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:22:23.994832 systemd[1]: Reached target network.target - Network. Nov 6 23:22:24.004294 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 23:22:24.004409 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 23:22:24.007043 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Nov 6 23:22:24.007214 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:22:24.009201 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 23:22:24.009271 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2025-11-06 23:22:23.722773 UTC. Nov 6 23:22:24.009302 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 23:22:24.016778 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 23:22:24.022493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 23:22:24.024760 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:22:24.026594 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:22:24.032136 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 6 23:22:24.043622 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:22:24.056400 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:22:24.081762 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:22:24.083384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:22:24.084608 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:22:24.085843 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:22:24.087219 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:22:24.088644 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:22:24.089848 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:22:24.091206 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:22:24.092655 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:22:24.092694 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:22:24.093656 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:22:24.095503 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:22:24.097993 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 6 23:22:24.101452 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:22:24.102945 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:22:24.104300 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:22:24.107448 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:22:24.108902 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:22:24.111936 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:22:24.113635 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:22:24.114896 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:22:24.115895 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:22:24.116935 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:22:24.116969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:22:24.117878 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:22:24.119811 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:22:24.120219 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:22:24.122692 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 23:22:24.124802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:22:24.126310 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:22:24.128469 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 6 23:22:24.131426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:22:24.134000 jq[1452]: false Nov 6 23:22:24.142434 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:22:24.145693 extend-filesystems[1453]: Found loop3 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found loop4 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found loop5 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda1 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda2 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda3 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found usr Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda4 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda6 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda7 Nov 6 23:22:24.145693 extend-filesystems[1453]: Found vda9 Nov 6 23:22:24.145693 extend-filesystems[1453]: Checking size of /dev/vda9 Nov 6 23:22:24.145421 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:22:24.165719 extend-filesystems[1453]: Resized partition /dev/vda9 Nov 6 23:22:24.147571 dbus-daemon[1451]: [system] SELinux support is enabled Nov 6 23:22:24.170500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1370) Nov 6 23:22:24.170523 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 6 23:22:24.155182 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:22:24.170694 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Nov 6 23:22:24.157129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:22:24.157666 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Nov 6 23:22:24.175813 jq[1471]: true Nov 6 23:22:24.158345 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:22:24.160323 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:22:24.162471 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 23:22:24.170516 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 6 23:22:24.174822 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:22:24.176779 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:22:24.177174 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:22:24.177346 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:22:24.181615 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:22:24.181800 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:22:24.198844 update_engine[1470]: I20251106 23:22:24.195837 1470 main.cc:92] Flatcar Update Engine starting Nov 6 23:22:24.199090 update_engine[1470]: I20251106 23:22:24.198897 1470 update_check_scheduler.cc:74] Next update check in 2m26s Nov 6 23:22:24.203075 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:22:24.222479 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 6 23:22:24.204747 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:22:24.222612 jq[1478]: true Nov 6 23:22:24.209068 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:22:24.209098 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 6 23:22:24.211610 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:22:24.211631 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:22:24.215922 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:22:24.224092 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 23:22:24.224092 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 23:22:24.224092 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 6 23:22:24.240410 extend-filesystems[1453]: Resized filesystem in /dev/vda9 Nov 6 23:22:24.241432 tar[1476]: linux-arm64/LICENSE Nov 6 23:22:24.241432 tar[1476]: linux-arm64/helm Nov 6 23:22:24.226610 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:22:24.226773 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button) Nov 6 23:22:24.226829 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 23:22:24.227257 systemd-logind[1467]: New seat seat0. Nov 6 23:22:24.228812 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:22:24.249253 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:22:24.253930 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:22:24.257096 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Nov 6 23:22:24.274645 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:22:24.359165 containerd[1479]: time="2025-11-06T23:22:24.359083720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:22:24.402274 containerd[1479]: time="2025-11-06T23:22:24.400756640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402418 containerd[1479]: time="2025-11-06T23:22:24.402287160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402418 containerd[1479]: time="2025-11-06T23:22:24.402315440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:22:24.402418 containerd[1479]: time="2025-11-06T23:22:24.402330960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:22:24.402516 containerd[1479]: time="2025-11-06T23:22:24.402493360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 6 23:22:24.402540 containerd[1479]: time="2025-11-06T23:22:24.402520760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402612 containerd[1479]: time="2025-11-06T23:22:24.402580800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402612 containerd[1479]: time="2025-11-06T23:22:24.402596920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402799 containerd[1479]: time="2025-11-06T23:22:24.402779480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402846 containerd[1479]: time="2025-11-06T23:22:24.402798640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402846 containerd[1479]: time="2025-11-06T23:22:24.402811360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402846 containerd[1479]: time="2025-11-06T23:22:24.402820360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:22:24.402961 containerd[1479]: time="2025-11-06T23:22:24.402900840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:22:24.403115 containerd[1479]: time="2025-11-06T23:22:24.403095040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:22:24.403235 containerd[1479]: time="2025-11-06T23:22:24.403217400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:22:24.403282 containerd[1479]: time="2025-11-06T23:22:24.403235280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Nov 6 23:22:24.403353 containerd[1479]: time="2025-11-06T23:22:24.403335200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 6 23:22:24.403397 containerd[1479]: time="2025-11-06T23:22:24.403383560Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:22:24.411787 containerd[1479]: time="2025-11-06T23:22:24.411724120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:22:24.411787 containerd[1479]: time="2025-11-06T23:22:24.411783080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 6 23:22:24.411890 containerd[1479]: time="2025-11-06T23:22:24.411809000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 6 23:22:24.411890 containerd[1479]: time="2025-11-06T23:22:24.411826400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:22:24.411890 containerd[1479]: time="2025-11-06T23:22:24.411842160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:22:24.412034 containerd[1479]: time="2025-11-06T23:22:24.411983400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:22:24.412330 containerd[1479]: time="2025-11-06T23:22:24.412313080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 6 23:22:24.412444 containerd[1479]: time="2025-11-06T23:22:24.412426000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:22:24.412466 containerd[1479]: time="2025-11-06T23:22:24.412448520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Nov 6 23:22:24.412466 containerd[1479]: time="2025-11-06T23:22:24.412463080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:22:24.412516 containerd[1479]: time="2025-11-06T23:22:24.412477040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412516 containerd[1479]: time="2025-11-06T23:22:24.412490080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412516 containerd[1479]: time="2025-11-06T23:22:24.412503360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412569 containerd[1479]: time="2025-11-06T23:22:24.412517600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412569 containerd[1479]: time="2025-11-06T23:22:24.412537400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412569 containerd[1479]: time="2025-11-06T23:22:24.412550400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412569 containerd[1479]: time="2025-11-06T23:22:24.412563200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412634 containerd[1479]: time="2025-11-06T23:22:24.412573880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 6 23:22:24.412634 containerd[1479]: time="2025-11-06T23:22:24.412594200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Nov 6 23:22:24.412634 containerd[1479]: time="2025-11-06T23:22:24.412607240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412634 containerd[1479]: time="2025-11-06T23:22:24.412618960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412634 containerd[1479]: time="2025-11-06T23:22:24.412631280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412718 containerd[1479]: time="2025-11-06T23:22:24.412642800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412718 containerd[1479]: time="2025-11-06T23:22:24.412655280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412718 containerd[1479]: time="2025-11-06T23:22:24.412668560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412718 containerd[1479]: time="2025-11-06T23:22:24.412682240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412718 containerd[1479]: time="2025-11-06T23:22:24.412695040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412718 containerd[1479]: time="2025-11-06T23:22:24.412710960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412722400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412734240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412746680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412760720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412780400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412795080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412805360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:22:24.412979 containerd[1479]: time="2025-11-06T23:22:24.412977160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.412995560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.413006280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.413018920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.413027640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.413039800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.413049040Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:22:24.413109 containerd[1479]: time="2025-11-06T23:22:24.413058400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 6 23:22:24.413467 containerd[1479]: time="2025-11-06T23:22:24.413412480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:22:24.413467 containerd[1479]: time="2025-11-06T23:22:24.413465360Z" level=info msg="Connect containerd service" Nov 6 23:22:24.413663 containerd[1479]: time="2025-11-06T23:22:24.413498560Z" level=info msg="using legacy CRI server" Nov 6 23:22:24.413663 containerd[1479]: time="2025-11-06T23:22:24.413505520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:22:24.413764 containerd[1479]: time="2025-11-06T23:22:24.413742560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:22:24.414506 containerd[1479]: time="2025-11-06T23:22:24.414478120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Nov 6 23:22:24.414695 containerd[1479]: time="2025-11-06T23:22:24.414662440Z" level=info msg="Start subscribing containerd event" Nov 6 23:22:24.414726 containerd[1479]: time="2025-11-06T23:22:24.414708440Z" level=info msg="Start recovering state" Nov 6 23:22:24.414777 containerd[1479]: time="2025-11-06T23:22:24.414763840Z" level=info msg="Start event monitor" Nov 6 23:22:24.414801 containerd[1479]: time="2025-11-06T23:22:24.414777320Z" level=info msg="Start snapshots syncer" Nov 6 23:22:24.414801 containerd[1479]: time="2025-11-06T23:22:24.414786920Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:22:24.414801 containerd[1479]: time="2025-11-06T23:22:24.414793680Z" level=info msg="Start streaming server" Nov 6 23:22:24.415404 containerd[1479]: time="2025-11-06T23:22:24.415363760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:22:24.415450 containerd[1479]: time="2025-11-06T23:22:24.415415920Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:22:24.416771 containerd[1479]: time="2025-11-06T23:22:24.416722760Z" level=info msg="containerd successfully booted in 0.059078s" Nov 6 23:22:24.416934 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:22:24.484741 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:22:24.502831 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:22:24.516501 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:22:24.522293 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:22:24.522499 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:22:24.526238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:22:24.537499 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:22:24.550629 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Nov 6 23:22:24.553360 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 6 23:22:24.555079 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 23:22:24.603753 tar[1476]: linux-arm64/README.md Nov 6 23:22:24.620679 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:22:25.217360 systemd-networkd[1401]: eth0: Gained IPv6LL Nov 6 23:22:25.219873 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 23:22:25.221655 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:22:25.233491 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 23:22:25.235916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:22:25.237969 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:22:25.250666 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 23:22:25.250863 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 23:22:25.252907 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:22:25.254348 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:22:25.757030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:25.758587 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:22:25.761533 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:22:25.764887 systemd[1]: Startup finished in 570ms (kernel) + 6.758s (initrd) + 3.360s (userspace) = 10.690s. 
Nov 6 23:22:26.096500 kubelet[1564]: E1106 23:22:26.096388 1564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:22:26.098981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:22:26.099130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:22:26.099511 systemd[1]: kubelet.service: Consumed 749ms CPU time, 262M memory peak. Nov 6 23:22:28.167937 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:22:28.169117 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:48520.service - OpenSSH per-connection server daemon (10.0.0.1:48520). Nov 6 23:22:28.221646 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 48520 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:28.223194 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:28.228727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:22:28.243999 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:22:28.252456 systemd-logind[1467]: New session 1 of user core. Nov 6 23:22:28.255729 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:22:28.268543 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:22:28.271034 (systemd)[1581]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:22:28.273035 systemd-logind[1467]: New session c1 of user core. Nov 6 23:22:28.369161 systemd[1581]: Queued start job for default target default.target. 
Nov 6 23:22:28.381152 systemd[1581]: Created slice app.slice - User Application Slice. Nov 6 23:22:28.381180 systemd[1581]: Reached target paths.target - Paths. Nov 6 23:22:28.381216 systemd[1581]: Reached target timers.target - Timers. Nov 6 23:22:28.382367 systemd[1581]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:22:28.391077 systemd[1581]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:22:28.391141 systemd[1581]: Reached target sockets.target - Sockets. Nov 6 23:22:28.391176 systemd[1581]: Reached target basic.target - Basic System. Nov 6 23:22:28.391204 systemd[1581]: Reached target default.target - Main User Target. Nov 6 23:22:28.391235 systemd[1581]: Startup finished in 113ms. Nov 6 23:22:28.391428 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:22:28.392885 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:22:28.451464 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:48532.service - OpenSSH per-connection server daemon (10.0.0.1:48532). Nov 6 23:22:28.495936 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 48532 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:28.497104 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:28.501313 systemd-logind[1467]: New session 2 of user core. Nov 6 23:22:28.513393 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:22:28.564886 sshd[1594]: Connection closed by 10.0.0.1 port 48532 Nov 6 23:22:28.565349 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Nov 6 23:22:28.579228 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:48532.service: Deactivated successfully. Nov 6 23:22:28.580526 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 23:22:28.583999 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. 
Nov 6 23:22:28.591517 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:48538.service - OpenSSH per-connection server daemon (10.0.0.1:48538). Nov 6 23:22:28.592733 systemd-logind[1467]: Removed session 2. Nov 6 23:22:28.635133 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 48538 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:28.636533 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:28.640331 systemd-logind[1467]: New session 3 of user core. Nov 6 23:22:28.653375 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:22:28.700500 sshd[1602]: Connection closed by 10.0.0.1 port 48538 Nov 6 23:22:28.700903 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Nov 6 23:22:28.715988 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:48540.service - OpenSSH per-connection server daemon (10.0.0.1:48540). Nov 6 23:22:28.716414 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:48538.service: Deactivated successfully. Nov 6 23:22:28.719450 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:22:28.721491 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Nov 6 23:22:28.723364 systemd-logind[1467]: Removed session 3. Nov 6 23:22:28.762770 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 48540 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:28.763168 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:28.766690 systemd-logind[1467]: New session 4 of user core. Nov 6 23:22:28.774476 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:22:28.827253 sshd[1610]: Connection closed by 10.0.0.1 port 48540 Nov 6 23:22:28.826515 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Nov 6 23:22:28.836139 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:48540.service: Deactivated successfully. 
Nov 6 23:22:28.837454 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:22:28.838612 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:22:28.851646 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550). Nov 6 23:22:28.852461 systemd-logind[1467]: Removed session 4. Nov 6 23:22:28.895730 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:28.897030 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:28.900686 systemd-logind[1467]: New session 5 of user core. Nov 6 23:22:28.915367 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:22:28.970883 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:22:28.971127 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:22:28.987177 sudo[1619]: pam_unix(sudo:session): session closed for user root Nov 6 23:22:28.989436 sshd[1618]: Connection closed by 10.0.0.1 port 48550 Nov 6 23:22:28.988780 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Nov 6 23:22:29.014127 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:48550.service: Deactivated successfully. Nov 6 23:22:29.016174 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:22:29.019444 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:22:29.028516 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:48562.service - OpenSSH per-connection server daemon (10.0.0.1:48562). Nov 6 23:22:29.029439 systemd-logind[1467]: Removed session 5. 
Nov 6 23:22:29.071980 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 48562 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:29.073123 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:29.077115 systemd-logind[1467]: New session 6 of user core. Nov 6 23:22:29.089370 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:22:29.137822 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:22:29.138075 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:22:29.140873 sudo[1629]: pam_unix(sudo:session): session closed for user root Nov 6 23:22:29.146262 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:22:29.146508 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:22:29.171574 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:22:29.196122 augenrules[1651]: No rules Nov 6 23:22:29.197200 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:22:29.198328 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:22:29.199467 sudo[1628]: pam_unix(sudo:session): session closed for user root Nov 6 23:22:29.200513 sshd[1627]: Connection closed by 10.0.0.1 port 48562 Nov 6 23:22:29.200832 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Nov 6 23:22:29.212229 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:48562.service: Deactivated successfully. Nov 6 23:22:29.213668 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:22:29.216880 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:22:29.228492 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:48568.service - OpenSSH per-connection server daemon (10.0.0.1:48568). 
Nov 6 23:22:29.232665 systemd-logind[1467]: Removed session 6. Nov 6 23:22:29.269199 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 48568 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:22:29.270261 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:22:29.274298 systemd-logind[1467]: New session 7 of user core. Nov 6 23:22:29.280394 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 23:22:29.328563 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:22:29.328822 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:22:29.613491 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:22:29.613589 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:22:29.854591 dockerd[1683]: time="2025-11-06T23:22:29.854532277Z" level=info msg="Starting up" Nov 6 23:22:30.031385 dockerd[1683]: time="2025-11-06T23:22:30.031291184Z" level=info msg="Loading containers: start." Nov 6 23:22:30.163268 kernel: Initializing XFRM netlink socket Nov 6 23:22:30.225378 systemd-networkd[1401]: docker0: Link UP Nov 6 23:22:30.259363 dockerd[1683]: time="2025-11-06T23:22:30.259329959Z" level=info msg="Loading containers: done." Nov 6 23:22:30.270423 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3495706342-merged.mount: Deactivated successfully. 
Nov 6 23:22:30.273229 dockerd[1683]: time="2025-11-06T23:22:30.273178865Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:22:30.273313 dockerd[1683]: time="2025-11-06T23:22:30.273284372Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:22:30.273466 dockerd[1683]: time="2025-11-06T23:22:30.273446578Z" level=info msg="Daemon has completed initialization" Nov 6 23:22:30.299996 dockerd[1683]: time="2025-11-06T23:22:30.299891933Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:22:30.300060 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:22:30.884286 containerd[1479]: time="2025-11-06T23:22:30.884225329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 23:22:31.464167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774058774.mount: Deactivated successfully. 
Nov 6 23:22:32.489933 containerd[1479]: time="2025-11-06T23:22:32.489878021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:32.490575 containerd[1479]: time="2025-11-06T23:22:32.490538445Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Nov 6 23:22:32.491153 containerd[1479]: time="2025-11-06T23:22:32.491131313Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:32.494355 containerd[1479]: time="2025-11-06T23:22:32.494303488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:32.495480 containerd[1479]: time="2025-11-06T23:22:32.495435193Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.611148379s" Nov 6 23:22:32.495480 containerd[1479]: time="2025-11-06T23:22:32.495479245Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 6 23:22:32.496728 containerd[1479]: time="2025-11-06T23:22:32.496693178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 23:22:33.637860 containerd[1479]: time="2025-11-06T23:22:33.637801276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:33.638864 containerd[1479]: time="2025-11-06T23:22:33.638804345Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Nov 6 23:22:33.639587 containerd[1479]: time="2025-11-06T23:22:33.639555748Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:33.642521 containerd[1479]: time="2025-11-06T23:22:33.642484557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:33.644550 containerd[1479]: time="2025-11-06T23:22:33.644501402Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.14777788s" Nov 6 23:22:33.644550 containerd[1479]: time="2025-11-06T23:22:33.644548337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 6 23:22:33.645020 containerd[1479]: time="2025-11-06T23:22:33.644997623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 23:22:34.909336 containerd[1479]: time="2025-11-06T23:22:34.909267057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:34.909861 containerd[1479]: time="2025-11-06T23:22:34.909813077Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Nov 6 23:22:34.910621 containerd[1479]: time="2025-11-06T23:22:34.910575305Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:34.915228 containerd[1479]: time="2025-11-06T23:22:34.914330571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:34.915228 containerd[1479]: time="2025-11-06T23:22:34.915069650Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.27004021s" Nov 6 23:22:34.915228 containerd[1479]: time="2025-11-06T23:22:34.915100633Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 6 23:22:34.916007 containerd[1479]: time="2025-11-06T23:22:34.915982203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 23:22:35.884012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853698014.mount: Deactivated successfully. 
Nov 6 23:22:36.130959 containerd[1479]: time="2025-11-06T23:22:36.130451603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:36.131330 containerd[1479]: time="2025-11-06T23:22:36.131010598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Nov 6 23:22:36.132619 containerd[1479]: time="2025-11-06T23:22:36.132586625Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:36.134926 containerd[1479]: time="2025-11-06T23:22:36.134819514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:36.135464 containerd[1479]: time="2025-11-06T23:22:36.135439562Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.219419584s" Nov 6 23:22:36.135547 containerd[1479]: time="2025-11-06T23:22:36.135532431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 6 23:22:36.136056 containerd[1479]: time="2025-11-06T23:22:36.136014386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 23:22:36.349505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:22:36.358421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 23:22:36.462666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:36.465994 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:22:36.561504 kubelet[1963]: E1106 23:22:36.561442 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:22:36.564864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:22:36.565017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:22:36.565323 systemd[1]: kubelet.service: Consumed 133ms CPU time, 109.6M memory peak. Nov 6 23:22:36.829566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932050991.mount: Deactivated successfully. 
Nov 6 23:22:37.743298 containerd[1479]: time="2025-11-06T23:22:37.742680406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:37.743659 containerd[1479]: time="2025-11-06T23:22:37.743353836Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Nov 6 23:22:37.744315 containerd[1479]: time="2025-11-06T23:22:37.744280903Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:37.748120 containerd[1479]: time="2025-11-06T23:22:37.748082615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:37.750008 containerd[1479]: time="2025-11-06T23:22:37.749967884Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.613916872s" Nov 6 23:22:37.750053 containerd[1479]: time="2025-11-06T23:22:37.750008985Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 6 23:22:37.750519 containerd[1479]: time="2025-11-06T23:22:37.750480481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 23:22:38.192714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693572867.mount: Deactivated successfully. 
Nov 6 23:22:38.197118 containerd[1479]: time="2025-11-06T23:22:38.197078914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:38.197580 containerd[1479]: time="2025-11-06T23:22:38.197538954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 6 23:22:38.198546 containerd[1479]: time="2025-11-06T23:22:38.198506372Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:38.201014 containerd[1479]: time="2025-11-06T23:22:38.200614508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:38.201590 containerd[1479]: time="2025-11-06T23:22:38.201563762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.050789ms" Nov 6 23:22:38.201656 containerd[1479]: time="2025-11-06T23:22:38.201596117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 6 23:22:38.202287 containerd[1479]: time="2025-11-06T23:22:38.202259227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 23:22:38.625371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386962337.mount: Deactivated successfully. 
Nov 6 23:22:40.406311 containerd[1479]: time="2025-11-06T23:22:40.406074713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:40.406743 containerd[1479]: time="2025-11-06T23:22:40.406690603Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Nov 6 23:22:40.407668 containerd[1479]: time="2025-11-06T23:22:40.407619296Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:40.411444 containerd[1479]: time="2025-11-06T23:22:40.411413521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:22:40.413581 containerd[1479]: time="2025-11-06T23:22:40.413424043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.211134755s" Nov 6 23:22:40.413581 containerd[1479]: time="2025-11-06T23:22:40.413460666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 6 23:22:46.307799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:46.307936 systemd[1]: kubelet.service: Consumed 133ms CPU time, 109.6M memory peak. Nov 6 23:22:46.320485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:22:46.343048 systemd[1]: Reload requested from client PID 2112 ('systemctl') (unit session-7.scope)... 
Nov 6 23:22:46.343064 systemd[1]: Reloading... Nov 6 23:22:46.426347 zram_generator::config[2162]: No configuration found. Nov 6 23:22:46.530826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:22:46.605674 systemd[1]: Reloading finished in 262 ms. Nov 6 23:22:46.642320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:46.645099 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:22:46.646353 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:22:46.647322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:46.647377 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95M memory peak. Nov 6 23:22:46.648974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:22:46.748695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:46.752348 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:22:46.789256 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:22:46.789256 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:22:46.789577 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 23:22:46.789577 kubelet[2203]: I1106 23:22:46.789330 2203 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:22:47.463875 kubelet[2203]: I1106 23:22:47.463825 2203 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 23:22:47.463875 kubelet[2203]: I1106 23:22:47.463859 2203 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:22:47.464104 kubelet[2203]: I1106 23:22:47.464061 2203 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:22:47.479072 kubelet[2203]: E1106 23:22:47.479032 2203 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:22:47.480027 kubelet[2203]: I1106 23:22:47.479951 2203 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:22:47.489495 kubelet[2203]: E1106 23:22:47.489455 2203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:22:47.489495 kubelet[2203]: I1106 23:22:47.489495 2203 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:22:47.491894 kubelet[2203]: I1106 23:22:47.491858 2203 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:22:47.492200 kubelet[2203]: I1106 23:22:47.492162 2203 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:22:47.492361 kubelet[2203]: I1106 23:22:47.492188 2203 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:22:47.492456 kubelet[2203]: I1106 23:22:47.492417 2203 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:22:47.492456 
kubelet[2203]: I1106 23:22:47.492426 2203 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 23:22:47.492628 kubelet[2203]: I1106 23:22:47.492613 2203 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:22:47.495094 kubelet[2203]: I1106 23:22:47.495063 2203 kubelet.go:480] "Attempting to sync node with API server" Nov 6 23:22:47.495094 kubelet[2203]: I1106 23:22:47.495089 2203 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:22:47.495165 kubelet[2203]: I1106 23:22:47.495114 2203 kubelet.go:386] "Adding apiserver pod source" Nov 6 23:22:47.495165 kubelet[2203]: I1106 23:22:47.495129 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:22:47.496366 kubelet[2203]: I1106 23:22:47.496150 2203 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:22:47.496846 kubelet[2203]: I1106 23:22:47.496820 2203 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:22:47.496950 kubelet[2203]: W1106 23:22:47.496936 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 6 23:22:47.497495 kubelet[2203]: E1106 23:22:47.497456 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:22:47.497748 kubelet[2203]: E1106 23:22:47.497714 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:22:47.499237 kubelet[2203]: I1106 23:22:47.499217 2203 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:22:47.499314 kubelet[2203]: I1106 23:22:47.499279 2203 server.go:1289] "Started kubelet" Nov 6 23:22:47.500390 kubelet[2203]: I1106 23:22:47.499389 2203 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:22:47.501189 kubelet[2203]: I1106 23:22:47.501173 2203 server.go:317] "Adding debug handlers to kubelet server" Nov 6 23:22:47.501559 kubelet[2203]: I1106 23:22:47.501514 2203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:22:47.501815 kubelet[2203]: I1106 23:22:47.501786 2203 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:22:47.503047 kubelet[2203]: I1106 23:22:47.503023 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:22:47.503118 kubelet[2203]: I1106 23:22:47.503073 2203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:22:47.504017 kubelet[2203]: E1106 
23:22:47.503035 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18758e5eea96b998 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 23:22:47.499233688 +0000 UTC m=+0.743618787,LastTimestamp:2025-11-06 23:22:47.499233688 +0000 UTC m=+0.743618787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 23:22:47.504017 kubelet[2203]: E1106 23:22:47.503980 2203 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:22:47.505209 kubelet[2203]: E1106 23:22:47.504199 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:47.505209 kubelet[2203]: E1106 23:22:47.504827 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Nov 6 23:22:47.505209 kubelet[2203]: I1106 23:22:47.505000 2203 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:22:47.505209 kubelet[2203]: I1106 23:22:47.505095 2203 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:22:47.505209 kubelet[2203]: I1106 23:22:47.505160 2203 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:22:47.505792 kubelet[2203]: E1106 23:22:47.505751 2203 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:22:47.506631 kubelet[2203]: I1106 23:22:47.506605 2203 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:22:47.506722 kubelet[2203]: I1106 23:22:47.506710 2203 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:22:47.506873 kubelet[2203]: I1106 23:22:47.506843 2203 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:22:47.520545 kubelet[2203]: I1106 23:22:47.520499 2203 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 23:22:47.521848 kubelet[2203]: I1106 23:22:47.521813 2203 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 23:22:47.521848 kubelet[2203]: I1106 23:22:47.521840 2203 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 23:22:47.521926 kubelet[2203]: I1106 23:22:47.521858 2203 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:22:47.521926 kubelet[2203]: I1106 23:22:47.521866 2203 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 23:22:47.521926 kubelet[2203]: E1106 23:22:47.521907 2203 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:22:47.522965 kubelet[2203]: E1106 23:22:47.522925 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:22:47.524641 kubelet[2203]: I1106 23:22:47.524591 2203 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:22:47.524772 kubelet[2203]: I1106 23:22:47.524749 2203 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:22:47.524839 kubelet[2203]: I1106 23:22:47.524830 2203 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:22:47.598413 kubelet[2203]: I1106 23:22:47.598383 2203 policy_none.go:49] "None policy: Start" Nov 6 23:22:47.598548 kubelet[2203]: I1106 23:22:47.598536 2203 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:22:47.598643 kubelet[2203]: I1106 23:22:47.598633 2203 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:22:47.604595 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 6 23:22:47.604996 kubelet[2203]: E1106 23:22:47.604964 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:47.622806 kubelet[2203]: E1106 23:22:47.622769 2203 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 23:22:47.623564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:22:47.626794 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:22:47.636282 kubelet[2203]: E1106 23:22:47.636184 2203 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:22:47.636517 kubelet[2203]: I1106 23:22:47.636449 2203 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:22:47.636517 kubelet[2203]: I1106 23:22:47.636466 2203 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:22:47.636720 kubelet[2203]: I1106 23:22:47.636686 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:22:47.637405 kubelet[2203]: E1106 23:22:47.637361 2203 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:22:47.637405 kubelet[2203]: E1106 23:22:47.637409 2203 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 23:22:47.705763 kubelet[2203]: E1106 23:22:47.705697 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Nov 6 23:22:47.738042 kubelet[2203]: I1106 23:22:47.737951 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:22:47.738424 kubelet[2203]: E1106 23:22:47.738387 2203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Nov 6 23:22:47.832580 systemd[1]: Created slice kubepods-burstable-poda76cb8f598976f146deaed339cf03720.slice - libcontainer container kubepods-burstable-poda76cb8f598976f146deaed339cf03720.slice. Nov 6 23:22:47.841122 kubelet[2203]: E1106 23:22:47.840926 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:47.842993 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 6 23:22:47.845056 kubelet[2203]: E1106 23:22:47.844757 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:47.846986 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 6 23:22:47.848396 kubelet[2203]: E1106 23:22:47.848374 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:47.906603 kubelet[2203]: I1106 23:22:47.906558 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a76cb8f598976f146deaed339cf03720-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76cb8f598976f146deaed339cf03720\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:47.906603 kubelet[2203]: I1106 23:22:47.906598 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a76cb8f598976f146deaed339cf03720-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76cb8f598976f146deaed339cf03720\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:47.906603 kubelet[2203]: I1106 23:22:47.906618 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a76cb8f598976f146deaed339cf03720-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a76cb8f598976f146deaed339cf03720\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:47.906782 kubelet[2203]: I1106 23:22:47.906634 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:47.906782 kubelet[2203]: I1106 23:22:47.906650 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:47.906782 kubelet[2203]: I1106 23:22:47.906664 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 23:22:47.906782 kubelet[2203]: I1106 23:22:47.906676 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:47.906782 kubelet[2203]: I1106 23:22:47.906690 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:47.906888 kubelet[2203]: I1106 23:22:47.906703 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:47.939673 kubelet[2203]: I1106 23:22:47.939630 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:22:47.939984 
kubelet[2203]: E1106 23:22:47.939948 2203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Nov 6 23:22:48.106522 kubelet[2203]: E1106 23:22:48.106398 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Nov 6 23:22:48.141888 kubelet[2203]: E1106 23:22:48.141817 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:48.142539 containerd[1479]: time="2025-11-06T23:22:48.142504235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a76cb8f598976f146deaed339cf03720,Namespace:kube-system,Attempt:0,}" Nov 6 23:22:48.145678 kubelet[2203]: E1106 23:22:48.145656 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:48.146100 containerd[1479]: time="2025-11-06T23:22:48.146072144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 6 23:22:48.149366 kubelet[2203]: E1106 23:22:48.149346 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:48.149702 containerd[1479]: time="2025-11-06T23:22:48.149676991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 6 23:22:48.341351 
kubelet[2203]: I1106 23:22:48.341320 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:22:48.341690 kubelet[2203]: E1106 23:22:48.341650 2203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Nov 6 23:22:48.585676 kubelet[2203]: E1106 23:22:48.585547 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:22:48.658482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898877646.mount: Deactivated successfully. Nov 6 23:22:48.663700 containerd[1479]: time="2025-11-06T23:22:48.663608727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:22:48.665274 containerd[1479]: time="2025-11-06T23:22:48.665216893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 6 23:22:48.667616 containerd[1479]: time="2025-11-06T23:22:48.667571978Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:22:48.670268 containerd[1479]: time="2025-11-06T23:22:48.669206980Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:22:48.670268 containerd[1479]: time="2025-11-06T23:22:48.669870557Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:22:48.670919 containerd[1479]: time="2025-11-06T23:22:48.670891899Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:22:48.671524 containerd[1479]: time="2025-11-06T23:22:48.671482517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:22:48.673668 containerd[1479]: time="2025-11-06T23:22:48.673621282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:22:48.674958 containerd[1479]: time="2025-11-06T23:22:48.674933181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.198086ms" Nov 6 23:22:48.676212 containerd[1479]: time="2025-11-06T23:22:48.676181466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.042833ms" Nov 6 23:22:48.679144 containerd[1479]: time="2025-11-06T23:22:48.679111715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.530608ms" Nov 6 23:22:48.756560 kubelet[2203]: E1106 23:22:48.756508 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:22:48.792567 containerd[1479]: time="2025-11-06T23:22:48.792280067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:22:48.792567 containerd[1479]: time="2025-11-06T23:22:48.792349352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:22:48.792567 containerd[1479]: time="2025-11-06T23:22:48.792360454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:22:48.792567 containerd[1479]: time="2025-11-06T23:22:48.792442757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:22:48.796730 containerd[1479]: time="2025-11-06T23:22:48.796616099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:22:48.796730 containerd[1479]: time="2025-11-06T23:22:48.796685384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:22:48.797171 containerd[1479]: time="2025-11-06T23:22:48.797104887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:22:48.797171 containerd[1479]: time="2025-11-06T23:22:48.797159037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:22:48.797240 containerd[1479]: time="2025-11-06T23:22:48.797171895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:22:48.797299 containerd[1479]: time="2025-11-06T23:22:48.797252921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:22:48.799221 containerd[1479]: time="2025-11-06T23:22:48.796701118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:22:48.799221 containerd[1479]: time="2025-11-06T23:22:48.797889423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:22:48.819449 systemd[1]: Started cri-containerd-2530e4bd77ec50d6c60eb98d1dc3e6661eb4d824d91a72b923b948ed28f233d5.scope - libcontainer container 2530e4bd77ec50d6c60eb98d1dc3e6661eb4d824d91a72b923b948ed28f233d5. Nov 6 23:22:48.821003 systemd[1]: Started cri-containerd-3a9c5ef327196fb2318294a7326470f82f9bd9879b9eef092d6888ac8b1cbd0e.scope - libcontainer container 3a9c5ef327196fb2318294a7326470f82f9bd9879b9eef092d6888ac8b1cbd0e. Nov 6 23:22:48.822352 systemd[1]: Started cri-containerd-a63a3401d96f3e0e56d525e9da11cbc41adf5fbe04bc6cd868500dca8fd1474a.scope - libcontainer container a63a3401d96f3e0e56d525e9da11cbc41adf5fbe04bc6cd868500dca8fd1474a. 
Nov 6 23:22:48.825687 kubelet[2203]: E1106 23:22:48.825594 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:22:48.829296 kubelet[2203]: E1106 23:22:48.829230 2203 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:22:48.857340 containerd[1479]: time="2025-11-06T23:22:48.856941816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"2530e4bd77ec50d6c60eb98d1dc3e6661eb4d824d91a72b923b948ed28f233d5\"" Nov 6 23:22:48.859417 containerd[1479]: time="2025-11-06T23:22:48.859350531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63a3401d96f3e0e56d525e9da11cbc41adf5fbe04bc6cd868500dca8fd1474a\"" Nov 6 23:22:48.860096 kubelet[2203]: E1106 23:22:48.860061 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:48.861836 kubelet[2203]: E1106 23:22:48.861816 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:48.864399 containerd[1479]: time="2025-11-06T23:22:48.864364876Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a76cb8f598976f146deaed339cf03720,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a9c5ef327196fb2318294a7326470f82f9bd9879b9eef092d6888ac8b1cbd0e\"" Nov 6 23:22:48.865081 kubelet[2203]: E1106 23:22:48.865017 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:48.866948 containerd[1479]: time="2025-11-06T23:22:48.866904055Z" level=info msg="CreateContainer within sandbox \"2530e4bd77ec50d6c60eb98d1dc3e6661eb4d824d91a72b923b948ed28f233d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:22:48.868335 containerd[1479]: time="2025-11-06T23:22:48.868307322Z" level=info msg="CreateContainer within sandbox \"a63a3401d96f3e0e56d525e9da11cbc41adf5fbe04bc6cd868500dca8fd1474a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:22:48.869882 containerd[1479]: time="2025-11-06T23:22:48.869782949Z" level=info msg="CreateContainer within sandbox \"3a9c5ef327196fb2318294a7326470f82f9bd9879b9eef092d6888ac8b1cbd0e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:22:48.886124 containerd[1479]: time="2025-11-06T23:22:48.886076263Z" level=info msg="CreateContainer within sandbox \"a63a3401d96f3e0e56d525e9da11cbc41adf5fbe04bc6cd868500dca8fd1474a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"432b60766e1dd5280c356fa1af5b317986f4de880fc59a0cffff4e16e91556bd\"" Nov 6 23:22:48.887005 containerd[1479]: time="2025-11-06T23:22:48.886857485Z" level=info msg="StartContainer for \"432b60766e1dd5280c356fa1af5b317986f4de880fc59a0cffff4e16e91556bd\"" Nov 6 23:22:48.887005 containerd[1479]: time="2025-11-06T23:22:48.886923176Z" level=info msg="CreateContainer within sandbox \"3a9c5ef327196fb2318294a7326470f82f9bd9879b9eef092d6888ac8b1cbd0e\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4002371d1084c0005c85e3ef5ed53f2526bf9d9d4bc27df69375ea13eeb5806c\"" Nov 6 23:22:48.887331 containerd[1479]: time="2025-11-06T23:22:48.887237852Z" level=info msg="StartContainer for \"4002371d1084c0005c85e3ef5ed53f2526bf9d9d4bc27df69375ea13eeb5806c\"" Nov 6 23:22:48.888381 containerd[1479]: time="2025-11-06T23:22:48.888349564Z" level=info msg="CreateContainer within sandbox \"2530e4bd77ec50d6c60eb98d1dc3e6661eb4d824d91a72b923b948ed28f233d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7241ae85d62a3efdd06a78585a09c6356527a0b38ad147025f86e958043e18a2\"" Nov 6 23:22:48.888751 containerd[1479]: time="2025-11-06T23:22:48.888724621Z" level=info msg="StartContainer for \"7241ae85d62a3efdd06a78585a09c6356527a0b38ad147025f86e958043e18a2\"" Nov 6 23:22:48.910460 kubelet[2203]: E1106 23:22:48.908516 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Nov 6 23:22:48.919398 systemd[1]: Started cri-containerd-4002371d1084c0005c85e3ef5ed53f2526bf9d9d4bc27df69375ea13eeb5806c.scope - libcontainer container 4002371d1084c0005c85e3ef5ed53f2526bf9d9d4bc27df69375ea13eeb5806c. Nov 6 23:22:48.920367 systemd[1]: Started cri-containerd-432b60766e1dd5280c356fa1af5b317986f4de880fc59a0cffff4e16e91556bd.scope - libcontainer container 432b60766e1dd5280c356fa1af5b317986f4de880fc59a0cffff4e16e91556bd. Nov 6 23:22:48.921200 systemd[1]: Started cri-containerd-7241ae85d62a3efdd06a78585a09c6356527a0b38ad147025f86e958043e18a2.scope - libcontainer container 7241ae85d62a3efdd06a78585a09c6356527a0b38ad147025f86e958043e18a2. 
Nov 6 23:22:48.957199 containerd[1479]: time="2025-11-06T23:22:48.957139330Z" level=info msg="StartContainer for \"4002371d1084c0005c85e3ef5ed53f2526bf9d9d4bc27df69375ea13eeb5806c\" returns successfully" Nov 6 23:22:48.964704 containerd[1479]: time="2025-11-06T23:22:48.964657033Z" level=info msg="StartContainer for \"432b60766e1dd5280c356fa1af5b317986f4de880fc59a0cffff4e16e91556bd\" returns successfully" Nov 6 23:22:48.964930 containerd[1479]: time="2025-11-06T23:22:48.964670371Z" level=info msg="StartContainer for \"7241ae85d62a3efdd06a78585a09c6356527a0b38ad147025f86e958043e18a2\" returns successfully" Nov 6 23:22:49.143722 kubelet[2203]: I1106 23:22:49.143624 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:22:49.533560 kubelet[2203]: E1106 23:22:49.531500 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:49.533560 kubelet[2203]: E1106 23:22:49.533079 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:49.533560 kubelet[2203]: E1106 23:22:49.533208 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:49.533560 kubelet[2203]: E1106 23:22:49.533371 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:49.535091 kubelet[2203]: E1106 23:22:49.534946 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:49.535091 kubelet[2203]: E1106 23:22:49.535043 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:50.518851 kubelet[2203]: E1106 23:22:50.518800 2203 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 23:22:50.523651 kubelet[2203]: I1106 23:22:50.523487 2203 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 23:22:50.523651 kubelet[2203]: E1106 23:22:50.523519 2203 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 6 23:22:50.533321 kubelet[2203]: E1106 23:22:50.533283 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:50.536700 kubelet[2203]: E1106 23:22:50.536673 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:50.536807 kubelet[2203]: E1106 23:22:50.536791 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:50.536939 kubelet[2203]: E1106 23:22:50.536923 2203 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:22:50.537024 kubelet[2203]: E1106 23:22:50.537017 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:50.633432 kubelet[2203]: E1106 23:22:50.633390 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:50.737370 kubelet[2203]: E1106 23:22:50.734116 2203 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:50.834909 kubelet[2203]: E1106 23:22:50.834778 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:50.935636 kubelet[2203]: E1106 23:22:50.935593 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:51.005846 kubelet[2203]: I1106 23:22:51.005769 2203 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:51.012489 kubelet[2203]: E1106 23:22:51.012282 2203 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:51.012489 kubelet[2203]: I1106 23:22:51.012310 2203 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:51.014749 kubelet[2203]: E1106 23:22:51.014725 2203 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:51.015027 kubelet[2203]: I1106 23:22:51.014830 2203 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:22:51.016256 kubelet[2203]: E1106 23:22:51.016215 2203 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 23:22:51.498834 kubelet[2203]: I1106 23:22:51.498789 2203 apiserver.go:52] "Watching apiserver" Nov 6 23:22:51.505406 kubelet[2203]: I1106 23:22:51.505379 2203 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 
23:22:52.692152 systemd[1]: Reload requested from client PID 2488 ('systemctl') (unit session-7.scope)... Nov 6 23:22:52.692170 systemd[1]: Reloading... Nov 6 23:22:52.793286 zram_generator::config[2532]: No configuration found. Nov 6 23:22:52.958849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:22:53.042023 systemd[1]: Reloading finished in 349 ms. Nov 6 23:22:53.067823 kubelet[2203]: I1106 23:22:53.067696 2203 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:22:53.067937 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:22:53.089501 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:22:53.090344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:53.090408 systemd[1]: kubelet.service: Consumed 1.079s CPU time, 131M memory peak. Nov 6 23:22:53.102605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:22:53.206709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:22:53.210192 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:22:53.240293 kubelet[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:22:53.240293 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 6 23:22:53.240293 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:22:53.240293 kubelet[2574]: I1106 23:22:53.240223 2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:22:53.246109 kubelet[2574]: I1106 23:22:53.246081 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 23:22:53.246109 kubelet[2574]: I1106 23:22:53.246106 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:22:53.246321 kubelet[2574]: I1106 23:22:53.246307 2574 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:22:53.247547 kubelet[2574]: I1106 23:22:53.247530 2574 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 23:22:53.249760 kubelet[2574]: I1106 23:22:53.249716 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:22:53.252461 kubelet[2574]: E1106 23:22:53.252324 2574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:22:53.252461 kubelet[2574]: I1106 23:22:53.252379 2574 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:22:53.255616 kubelet[2574]: I1106 23:22:53.255598 2574 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:22:53.255945 kubelet[2574]: I1106 23:22:53.255873 2574 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:22:53.256153 kubelet[2574]: I1106 23:22:53.256022 2574 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:22:53.256291 kubelet[2574]: I1106 23:22:53.256279 2574 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:22:53.256385 
kubelet[2574]: I1106 23:22:53.256375 2574 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 23:22:53.256496 kubelet[2574]: I1106 23:22:53.256486 2574 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:22:53.256933 kubelet[2574]: I1106 23:22:53.256917 2574 kubelet.go:480] "Attempting to sync node with API server" Nov 6 23:22:53.257035 kubelet[2574]: I1106 23:22:53.257024 2574 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:22:53.257109 kubelet[2574]: I1106 23:22:53.257100 2574 kubelet.go:386] "Adding apiserver pod source" Nov 6 23:22:53.257161 kubelet[2574]: I1106 23:22:53.257153 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:22:53.258107 kubelet[2574]: I1106 23:22:53.258087 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:22:53.260569 kubelet[2574]: I1106 23:22:53.259203 2574 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:22:53.266268 kubelet[2574]: I1106 23:22:53.263792 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:22:53.266268 kubelet[2574]: I1106 23:22:53.263848 2574 server.go:1289] "Started kubelet" Nov 6 23:22:53.266268 kubelet[2574]: I1106 23:22:53.264939 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:22:53.266268 kubelet[2574]: I1106 23:22:53.265027 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:22:53.266268 kubelet[2574]: I1106 23:22:53.265187 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:22:53.266268 kubelet[2574]: I1106 23:22:53.265233 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:22:53.266268 kubelet[2574]: I1106 
23:22:53.266036 2574 server.go:317] "Adding debug handlers to kubelet server" Nov 6 23:22:53.271778 kubelet[2574]: I1106 23:22:53.269219 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:22:53.275502 kubelet[2574]: E1106 23:22:53.275480 2574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:22:53.275591 kubelet[2574]: I1106 23:22:53.275583 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:22:53.275826 kubelet[2574]: I1106 23:22:53.275810 2574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:22:53.276003 kubelet[2574]: I1106 23:22:53.275992 2574 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:22:53.278833 kubelet[2574]: E1106 23:22:53.278358 2574 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:22:53.278833 kubelet[2574]: I1106 23:22:53.278581 2574 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:22:53.278833 kubelet[2574]: I1106 23:22:53.278662 2574 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:22:53.280455 kubelet[2574]: I1106 23:22:53.280433 2574 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:22:53.288446 kubelet[2574]: I1106 23:22:53.288403 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 23:22:53.289314 kubelet[2574]: I1106 23:22:53.289237 2574 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 6 23:22:53.289314 kubelet[2574]: I1106 23:22:53.289318 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 23:22:53.289419 kubelet[2574]: I1106 23:22:53.289338 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 23:22:53.289419 kubelet[2574]: I1106 23:22:53.289345 2574 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 23:22:53.289419 kubelet[2574]: E1106 23:22:53.289396 2574 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:22:53.310447 kubelet[2574]: I1106 23:22:53.310425 2574 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:22:53.310447 kubelet[2574]: I1106 23:22:53.310444 2574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:22:53.310546 kubelet[2574]: I1106 23:22:53.310464 2574 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:22:53.310598 kubelet[2574]: I1106 23:22:53.310574 2574 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:22:53.310625 kubelet[2574]: I1106 23:22:53.310598 2574 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:22:53.310625 kubelet[2574]: I1106 23:22:53.310613 2574 policy_none.go:49] "None policy: Start" Nov 6 23:22:53.310625 kubelet[2574]: I1106 23:22:53.310621 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:22:53.310680 kubelet[2574]: I1106 23:22:53.310631 2574 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:22:53.310719 kubelet[2574]: I1106 23:22:53.310709 2574 state_mem.go:75] "Updated machine memory state" Nov 6 23:22:53.314037 kubelet[2574]: E1106 23:22:53.314016 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:22:53.314209 kubelet[2574]: I1106 23:22:53.314182 
2574 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:22:53.314209 kubelet[2574]: I1106 23:22:53.314194 2574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:22:53.314419 kubelet[2574]: I1106 23:22:53.314401 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:22:53.315872 kubelet[2574]: E1106 23:22:53.315848 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:22:53.390527 kubelet[2574]: I1106 23:22:53.390458 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:53.390527 kubelet[2574]: I1106 23:22:53.390500 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:53.390929 kubelet[2574]: I1106 23:22:53.390723 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:22:53.417904 kubelet[2574]: I1106 23:22:53.417877 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:22:53.477357 kubelet[2574]: I1106 23:22:53.477219 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 23:22:53.477357 kubelet[2574]: I1106 23:22:53.477271 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a76cb8f598976f146deaed339cf03720-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76cb8f598976f146deaed339cf03720\") " pod="kube-system/kube-apiserver-localhost" Nov 6 
23:22:53.477357 kubelet[2574]: I1106 23:22:53.477293 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a76cb8f598976f146deaed339cf03720-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a76cb8f598976f146deaed339cf03720\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:53.477357 kubelet[2574]: I1106 23:22:53.477318 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:53.477357 kubelet[2574]: I1106 23:22:53.477363 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:53.477605 kubelet[2574]: I1106 23:22:53.477384 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a76cb8f598976f146deaed339cf03720-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76cb8f598976f146deaed339cf03720\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:22:53.477605 kubelet[2574]: I1106 23:22:53.477431 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:53.477605 
kubelet[2574]: I1106 23:22:53.477448 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:53.477605 kubelet[2574]: I1106 23:22:53.477462 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:22:53.551449 kubelet[2574]: I1106 23:22:53.551412 2574 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 23:22:53.551579 kubelet[2574]: I1106 23:22:53.551497 2574 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 23:22:53.729185 kubelet[2574]: E1106 23:22:53.729070 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:53.729185 kubelet[2574]: E1106 23:22:53.729099 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:53.729331 kubelet[2574]: E1106 23:22:53.729216 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:53.828697 sudo[2615]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:22:53.828986 sudo[2615]: pam_unix(sudo:session): 
session opened for user root(uid=0) by core(uid=0) Nov 6 23:22:54.259644 kubelet[2574]: I1106 23:22:54.259593 2574 apiserver.go:52] "Watching apiserver" Nov 6 23:22:54.270470 sudo[2615]: pam_unix(sudo:session): session closed for user root Nov 6 23:22:54.276839 kubelet[2574]: I1106 23:22:54.276799 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:22:54.307412 kubelet[2574]: E1106 23:22:54.307267 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:54.307412 kubelet[2574]: E1106 23:22:54.307326 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:54.307664 kubelet[2574]: E1106 23:22:54.307649 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:54.332501 kubelet[2574]: I1106 23:22:54.331740 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.331722728 podStartE2EDuration="1.331722728s" podCreationTimestamp="2025-11-06 23:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:22:54.322620273 +0000 UTC m=+1.108329805" watchObservedRunningTime="2025-11-06 23:22:54.331722728 +0000 UTC m=+1.117432220" Nov 6 23:22:54.332862 kubelet[2574]: I1106 23:22:54.332737 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.332711671 podStartE2EDuration="1.332711671s" podCreationTimestamp="2025-11-06 23:22:53 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:22:54.331658696 +0000 UTC m=+1.117368228" watchObservedRunningTime="2025-11-06 23:22:54.332711671 +0000 UTC m=+1.118421163" Nov 6 23:22:54.345411 kubelet[2574]: I1106 23:22:54.345306 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.345292414 podStartE2EDuration="1.345292414s" podCreationTimestamp="2025-11-06 23:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:22:54.345289656 +0000 UTC m=+1.130999188" watchObservedRunningTime="2025-11-06 23:22:54.345292414 +0000 UTC m=+1.131001946" Nov 6 23:22:55.308436 kubelet[2574]: E1106 23:22:55.308405 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:55.308756 kubelet[2574]: E1106 23:22:55.308509 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:55.365507 kubelet[2574]: E1106 23:22:55.365469 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:22:55.741741 sudo[1663]: pam_unix(sudo:session): session closed for user root Nov 6 23:22:55.742974 sshd[1662]: Connection closed by 10.0.0.1 port 48568 Nov 6 23:22:55.743416 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Nov 6 23:22:55.746520 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:48568.service: Deactivated successfully. Nov 6 23:22:55.748916 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 6 23:22:55.749191 systemd[1]: session-7.scope: Consumed 7.796s CPU time, 256.2M memory peak. Nov 6 23:22:55.750804 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:22:55.751723 systemd-logind[1467]: Removed session 7. Nov 6 23:22:59.323509 kubelet[2574]: I1106 23:22:59.323470 2574 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:22:59.323909 containerd[1479]: time="2025-11-06T23:22:59.323817638Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:22:59.324107 kubelet[2574]: I1106 23:22:59.323991 2574 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:22:59.718102 kubelet[2574]: I1106 23:22:59.717385 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1c736f9-2300-4aa3-a489-c2ab79f7da34-kube-proxy\") pod \"kube-proxy-2rfrt\" (UID: \"f1c736f9-2300-4aa3-a489-c2ab79f7da34\") " pod="kube-system/kube-proxy-2rfrt" Nov 6 23:22:59.718102 kubelet[2574]: I1106 23:22:59.717419 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c736f9-2300-4aa3-a489-c2ab79f7da34-lib-modules\") pod \"kube-proxy-2rfrt\" (UID: \"f1c736f9-2300-4aa3-a489-c2ab79f7da34\") " pod="kube-system/kube-proxy-2rfrt" Nov 6 23:22:59.718102 kubelet[2574]: I1106 23:22:59.717438 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4wl\" (UniqueName: \"kubernetes.io/projected/f1c736f9-2300-4aa3-a489-c2ab79f7da34-kube-api-access-cx4wl\") pod \"kube-proxy-2rfrt\" (UID: \"f1c736f9-2300-4aa3-a489-c2ab79f7da34\") " pod="kube-system/kube-proxy-2rfrt" Nov 6 23:22:59.718102 kubelet[2574]: I1106 23:22:59.717458 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c736f9-2300-4aa3-a489-c2ab79f7da34-xtables-lock\") pod \"kube-proxy-2rfrt\" (UID: \"f1c736f9-2300-4aa3-a489-c2ab79f7da34\") " pod="kube-system/kube-proxy-2rfrt" Nov 6 23:22:59.734416 systemd[1]: Created slice kubepods-besteffort-podf1c736f9_2300_4aa3_a489_c2ab79f7da34.slice - libcontainer container kubepods-besteffort-podf1c736f9_2300_4aa3_a489_c2ab79f7da34.slice. Nov 6 23:22:59.749192 systemd[1]: Created slice kubepods-burstable-poded5292a9_e268_454c_bd6a_1912f01cc6bc.slice - libcontainer container kubepods-burstable-poded5292a9_e268_454c_bd6a_1912f01cc6bc.slice. Nov 6 23:22:59.818280 kubelet[2574]: I1106 23:22:59.818221 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-bpf-maps\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.818917 kubelet[2574]: I1106 23:22:59.818447 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-cgroup\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.818917 kubelet[2574]: I1106 23:22:59.818475 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-etc-cni-netd\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.818917 kubelet[2574]: I1106 23:22:59.818490 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-xtables-lock\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.818917 kubelet[2574]: I1106 23:22:59.818505 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hostproc\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.818917 kubelet[2574]: I1106 23:22:59.818520 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cni-path\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.818917 kubelet[2574]: I1106 23:22:59.818533 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-lib-modules\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819128 kubelet[2574]: I1106 23:22:59.818552 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed5292a9-e268-454c-bd6a-1912f01cc6bc-clustermesh-secrets\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819128 kubelet[2574]: I1106 23:22:59.818578 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-config-path\") pod \"cilium-xgnm9\" (UID: 
\"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819128 kubelet[2574]: I1106 23:22:59.818601 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-run\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819128 kubelet[2574]: I1106 23:22:59.818670 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-kernel\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819128 kubelet[2574]: I1106 23:22:59.818693 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-net\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819231 kubelet[2574]: I1106 23:22:59.818717 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hubble-tls\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.819231 kubelet[2574]: I1106 23:22:59.818731 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htdt\" (UniqueName: \"kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-kube-api-access-6htdt\") pod \"cilium-xgnm9\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " pod="kube-system/cilium-xgnm9" Nov 6 23:22:59.828258 kubelet[2574]: 
E1106 23:22:59.828212 2574 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 23:22:59.828359 kubelet[2574]: E1106 23:22:59.828346 2574 projected.go:194] Error preparing data for projected volume kube-api-access-cx4wl for pod kube-system/kube-proxy-2rfrt: configmap "kube-root-ca.crt" not found Nov 6 23:22:59.828488 kubelet[2574]: E1106 23:22:59.828475 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c736f9-2300-4aa3-a489-c2ab79f7da34-kube-api-access-cx4wl podName:f1c736f9-2300-4aa3-a489-c2ab79f7da34 nodeName:}" failed. No retries permitted until 2025-11-06 23:23:00.328444449 +0000 UTC m=+7.114153981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cx4wl" (UniqueName: "kubernetes.io/projected/f1c736f9-2300-4aa3-a489-c2ab79f7da34-kube-api-access-cx4wl") pod "kube-proxy-2rfrt" (UID: "f1c736f9-2300-4aa3-a489-c2ab79f7da34") : configmap "kube-root-ca.crt" not found Nov 6 23:22:59.927843 kubelet[2574]: E1106 23:22:59.927358 2574 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 23:22:59.927843 kubelet[2574]: E1106 23:22:59.927385 2574 projected.go:194] Error preparing data for projected volume kube-api-access-6htdt for pod kube-system/cilium-xgnm9: configmap "kube-root-ca.crt" not found Nov 6 23:22:59.927843 kubelet[2574]: E1106 23:22:59.927439 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-kube-api-access-6htdt podName:ed5292a9-e268-454c-bd6a-1912f01cc6bc nodeName:}" failed. No retries permitted until 2025-11-06 23:23:00.427421539 +0000 UTC m=+7.213131071 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6htdt" (UniqueName: "kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-kube-api-access-6htdt") pod "cilium-xgnm9" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc") : configmap "kube-root-ca.crt" not found Nov 6 23:23:00.473904 systemd[1]: Created slice kubepods-besteffort-podad0e2e00_67f3_448a_a5e7_1534cbe79fff.slice - libcontainer container kubepods-besteffort-podad0e2e00_67f3_448a_a5e7_1534cbe79fff.slice. Nov 6 23:23:00.523995 kubelet[2574]: I1106 23:23:00.523940 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6dms2\" (UID: \"ad0e2e00-67f3-448a-a5e7-1534cbe79fff\") " pod="kube-system/cilium-operator-6c4d7847fc-6dms2" Nov 6 23:23:00.524397 kubelet[2574]: I1106 23:23:00.524015 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snm6f\" (UniqueName: \"kubernetes.io/projected/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-kube-api-access-snm6f\") pod \"cilium-operator-6c4d7847fc-6dms2\" (UID: \"ad0e2e00-67f3-448a-a5e7-1534cbe79fff\") " pod="kube-system/cilium-operator-6c4d7847fc-6dms2" Nov 6 23:23:00.646678 kubelet[2574]: E1106 23:23:00.646638 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.647358 containerd[1479]: time="2025-11-06T23:23:00.647286754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2rfrt,Uid:f1c736f9-2300-4aa3-a489-c2ab79f7da34,Namespace:kube-system,Attempt:0,}" Nov 6 23:23:00.654056 kubelet[2574]: E1106 23:23:00.653943 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.654788 containerd[1479]: time="2025-11-06T23:23:00.654501117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xgnm9,Uid:ed5292a9-e268-454c-bd6a-1912f01cc6bc,Namespace:kube-system,Attempt:0,}" Nov 6 23:23:00.679471 containerd[1479]: time="2025-11-06T23:23:00.679292931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:23:00.679471 containerd[1479]: time="2025-11-06T23:23:00.679417821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:23:00.679471 containerd[1479]: time="2025-11-06T23:23:00.679433742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:00.680124 containerd[1479]: time="2025-11-06T23:23:00.680058151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:00.681871 containerd[1479]: time="2025-11-06T23:23:00.681714040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:23:00.681871 containerd[1479]: time="2025-11-06T23:23:00.681832089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:23:00.681871 containerd[1479]: time="2025-11-06T23:23:00.681844250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:00.682164 containerd[1479]: time="2025-11-06T23:23:00.682111151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:00.703439 systemd[1]: Started cri-containerd-0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4.scope - libcontainer container 0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4. Nov 6 23:23:00.704846 systemd[1]: Started cri-containerd-26539140293d99e22cfc52da9734a00dd5fd6ef871b959bc109e3fd07241e655.scope - libcontainer container 26539140293d99e22cfc52da9734a00dd5fd6ef871b959bc109e3fd07241e655. Nov 6 23:23:00.729898 containerd[1479]: time="2025-11-06T23:23:00.729513610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xgnm9,Uid:ed5292a9-e268-454c-bd6a-1912f01cc6bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\"" Nov 6 23:23:00.730263 kubelet[2574]: E1106 23:23:00.730213 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.732366 containerd[1479]: time="2025-11-06T23:23:00.731870394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:23:00.733418 containerd[1479]: time="2025-11-06T23:23:00.733321747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2rfrt,Uid:f1c736f9-2300-4aa3-a489-c2ab79f7da34,Namespace:kube-system,Attempt:0,} returns sandbox id \"26539140293d99e22cfc52da9734a00dd5fd6ef871b959bc109e3fd07241e655\"" Nov 6 23:23:00.734131 kubelet[2574]: E1106 23:23:00.734105 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.777635 kubelet[2574]: E1106 23:23:00.777591 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.778392 containerd[1479]: time="2025-11-06T23:23:00.778038996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6dms2,Uid:ad0e2e00-67f3-448a-a5e7-1534cbe79fff,Namespace:kube-system,Attempt:0,}" Nov 6 23:23:00.804138 containerd[1479]: time="2025-11-06T23:23:00.804087628Z" level=info msg="CreateContainer within sandbox \"26539140293d99e22cfc52da9734a00dd5fd6ef871b959bc109e3fd07241e655\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:23:00.847620 containerd[1479]: time="2025-11-06T23:23:00.847485774Z" level=info msg="CreateContainer within sandbox \"26539140293d99e22cfc52da9734a00dd5fd6ef871b959bc109e3fd07241e655\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f62909b82e390bdfb7bf1e5cb6bf2a834c2eccec1565285fc29ea4c1219eb8bd\"" Nov 6 23:23:00.848902 containerd[1479]: time="2025-11-06T23:23:00.848190789Z" level=info msg="StartContainer for \"f62909b82e390bdfb7bf1e5cb6bf2a834c2eccec1565285fc29ea4c1219eb8bd\"" Nov 6 23:23:00.854268 containerd[1479]: time="2025-11-06T23:23:00.853967960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:23:00.854268 containerd[1479]: time="2025-11-06T23:23:00.854072008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:23:00.854268 containerd[1479]: time="2025-11-06T23:23:00.854096130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:00.854268 containerd[1479]: time="2025-11-06T23:23:00.854194218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:00.873441 systemd[1]: Started cri-containerd-2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f.scope - libcontainer container 2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f. Nov 6 23:23:00.878882 systemd[1]: Started cri-containerd-f62909b82e390bdfb7bf1e5cb6bf2a834c2eccec1565285fc29ea4c1219eb8bd.scope - libcontainer container f62909b82e390bdfb7bf1e5cb6bf2a834c2eccec1565285fc29ea4c1219eb8bd. Nov 6 23:23:00.907827 containerd[1479]: time="2025-11-06T23:23:00.907787639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6dms2,Uid:ad0e2e00-67f3-448a-a5e7-1534cbe79fff,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f\"" Nov 6 23:23:00.908890 kubelet[2574]: E1106 23:23:00.908422 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.914554 containerd[1479]: time="2025-11-06T23:23:00.914510244Z" level=info msg="StartContainer for \"f62909b82e390bdfb7bf1e5cb6bf2a834c2eccec1565285fc29ea4c1219eb8bd\" returns successfully" Nov 6 23:23:00.932227 kubelet[2574]: E1106 23:23:00.932127 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:00.985970 kubelet[2574]: E1106 23:23:00.985858 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:01.321075 kubelet[2574]: E1106 23:23:01.320689 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:01.321075 
kubelet[2574]: E1106 23:23:01.320710 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:01.321075 kubelet[2574]: E1106 23:23:01.320913 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:01.343580 kubelet[2574]: I1106 23:23:01.343380 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2rfrt" podStartSLOduration=2.343365296 podStartE2EDuration="2.343365296s" podCreationTimestamp="2025-11-06 23:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:23:01.333445604 +0000 UTC m=+8.119155136" watchObservedRunningTime="2025-11-06 23:23:01.343365296 +0000 UTC m=+8.129074828" Nov 6 23:23:02.323659 kubelet[2574]: E1106 23:23:02.323608 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:05.374358 kubelet[2574]: E1106 23:23:05.374316 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:09.083892 update_engine[1470]: I20251106 23:23:09.083375 1470 update_attempter.cc:509] Updating boot flags... Nov 6 23:23:09.148283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2966) Nov 6 23:23:09.196638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2970) Nov 6 23:23:12.619139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187370551.mount: Deactivated successfully. 
Nov 6 23:23:13.896588 containerd[1479]: time="2025-11-06T23:23:13.896541896Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:23:13.897048 containerd[1479]: time="2025-11-06T23:23:13.897013794Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Nov 6 23:23:13.897938 containerd[1479]: time="2025-11-06T23:23:13.897896229Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:23:13.899532 containerd[1479]: time="2025-11-06T23:23:13.899507333Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.167169063s" Nov 6 23:23:13.899573 containerd[1479]: time="2025-11-06T23:23:13.899537734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 6 23:23:13.900801 containerd[1479]: time="2025-11-06T23:23:13.900776623Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:23:13.913677 containerd[1479]: time="2025-11-06T23:23:13.913626211Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:23:13.935592 containerd[1479]: time="2025-11-06T23:23:13.935478436Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\"" Nov 6 23:23:13.936751 containerd[1479]: time="2025-11-06T23:23:13.935967375Z" level=info msg="StartContainer for \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\"" Nov 6 23:23:13.963446 systemd[1]: Started cri-containerd-6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78.scope - libcontainer container 6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78. Nov 6 23:23:13.990546 containerd[1479]: time="2025-11-06T23:23:13.990480011Z" level=info msg="StartContainer for \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\" returns successfully" Nov 6 23:23:14.003841 systemd[1]: cri-containerd-6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78.scope: Deactivated successfully. 
Nov 6 23:23:14.196405 containerd[1479]: time="2025-11-06T23:23:14.181738644Z" level=info msg="shim disconnected" id=6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78 namespace=k8s.io Nov 6 23:23:14.196405 containerd[1479]: time="2025-11-06T23:23:14.196324955Z" level=warning msg="cleaning up after shim disconnected" id=6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78 namespace=k8s.io Nov 6 23:23:14.196405 containerd[1479]: time="2025-11-06T23:23:14.196339515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:23:14.355079 kubelet[2574]: E1106 23:23:14.355028 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:14.359444 containerd[1479]: time="2025-11-06T23:23:14.359401187Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:23:14.383416 containerd[1479]: time="2025-11-06T23:23:14.383363851Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\"" Nov 6 23:23:14.384532 containerd[1479]: time="2025-11-06T23:23:14.384319767Z" level=info msg="StartContainer for \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\"" Nov 6 23:23:14.418444 systemd[1]: Started cri-containerd-4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788.scope - libcontainer container 4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788. 
Nov 6 23:23:14.441854 containerd[1479]: time="2025-11-06T23:23:14.440966544Z" level=info msg="StartContainer for \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\" returns successfully" Nov 6 23:23:14.452178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:23:14.452423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:23:14.452788 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:23:14.459554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:23:14.459795 systemd[1]: cri-containerd-4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788.scope: Deactivated successfully. Nov 6 23:23:14.473742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:23:14.497077 containerd[1479]: time="2025-11-06T23:23:14.497020778Z" level=info msg="shim disconnected" id=4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788 namespace=k8s.io Nov 6 23:23:14.497524 containerd[1479]: time="2025-11-06T23:23:14.497342430Z" level=warning msg="cleaning up after shim disconnected" id=4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788 namespace=k8s.io Nov 6 23:23:14.497524 containerd[1479]: time="2025-11-06T23:23:14.497360311Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:23:14.933150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78-rootfs.mount: Deactivated successfully. Nov 6 23:23:15.305505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680913053.mount: Deactivated successfully. 
Nov 6 23:23:15.355357 kubelet[2574]: E1106 23:23:15.355325 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:15.360554 containerd[1479]: time="2025-11-06T23:23:15.360502935Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:23:15.380465 containerd[1479]: time="2025-11-06T23:23:15.380340129Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\"" Nov 6 23:23:15.383263 containerd[1479]: time="2025-11-06T23:23:15.383171591Z" level=info msg="StartContainer for \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\"" Nov 6 23:23:15.410425 systemd[1]: Started cri-containerd-74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd.scope - libcontainer container 74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd. Nov 6 23:23:15.450475 containerd[1479]: time="2025-11-06T23:23:15.450241686Z" level=info msg="StartContainer for \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\" returns successfully" Nov 6 23:23:15.453993 systemd[1]: cri-containerd-74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd.scope: Deactivated successfully. 
Nov 6 23:23:15.498264 containerd[1479]: time="2025-11-06T23:23:15.498026647Z" level=info msg="shim disconnected" id=74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd namespace=k8s.io Nov 6 23:23:15.498264 containerd[1479]: time="2025-11-06T23:23:15.498083929Z" level=warning msg="cleaning up after shim disconnected" id=74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd namespace=k8s.io Nov 6 23:23:15.498264 containerd[1479]: time="2025-11-06T23:23:15.498092209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:23:15.666575 containerd[1479]: time="2025-11-06T23:23:15.666463192Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:23:15.667539 containerd[1479]: time="2025-11-06T23:23:15.667360225Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Nov 6 23:23:15.668290 containerd[1479]: time="2025-11-06T23:23:15.668260337Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:23:15.670487 containerd[1479]: time="2025-11-06T23:23:15.670453416Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.769550388s" Nov 6 23:23:15.670487 containerd[1479]: time="2025-11-06T23:23:15.670489057Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 6 23:23:15.674728 containerd[1479]: time="2025-11-06T23:23:15.674692849Z" level=info msg="CreateContainer within sandbox \"2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:23:15.687477 containerd[1479]: time="2025-11-06T23:23:15.687429587Z" level=info msg="CreateContainer within sandbox \"2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\"" Nov 6 23:23:15.688217 containerd[1479]: time="2025-11-06T23:23:15.688176454Z" level=info msg="StartContainer for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\"" Nov 6 23:23:15.713442 systemd[1]: Started cri-containerd-304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5.scope - libcontainer container 304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5. 
Nov 6 23:23:15.735163 containerd[1479]: time="2025-11-06T23:23:15.735105824Z" level=info msg="StartContainer for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" returns successfully" Nov 6 23:23:16.358239 kubelet[2574]: E1106 23:23:16.358196 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:16.367446 kubelet[2574]: E1106 23:23:16.367407 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:16.370379 kubelet[2574]: I1106 23:23:16.370046 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6dms2" podStartSLOduration=1.608196855 podStartE2EDuration="16.370032212s" podCreationTimestamp="2025-11-06 23:23:00 +0000 UTC" firstStartedPulling="2025-11-06 23:23:00.909446489 +0000 UTC m=+7.695155981" lastFinishedPulling="2025-11-06 23:23:15.671281806 +0000 UTC m=+22.456991338" observedRunningTime="2025-11-06 23:23:16.368482239 +0000 UTC m=+23.154191771" watchObservedRunningTime="2025-11-06 23:23:16.370032212 +0000 UTC m=+23.155741744" Nov 6 23:23:16.383422 containerd[1479]: time="2025-11-06T23:23:16.383381312Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:23:16.415447 containerd[1479]: time="2025-11-06T23:23:16.415396573Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\"" Nov 6 23:23:16.416072 containerd[1479]: time="2025-11-06T23:23:16.416040955Z" level=info 
msg="StartContainer for \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\"" Nov 6 23:23:16.416060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768041021.mount: Deactivated successfully. Nov 6 23:23:16.454467 systemd[1]: Started cri-containerd-4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c.scope - libcontainer container 4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c. Nov 6 23:23:16.488122 systemd[1]: cri-containerd-4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c.scope: Deactivated successfully. Nov 6 23:23:16.495726 containerd[1479]: time="2025-11-06T23:23:16.495548250Z" level=info msg="StartContainer for \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\" returns successfully" Nov 6 23:23:16.531869 containerd[1479]: time="2025-11-06T23:23:16.531612891Z" level=info msg="shim disconnected" id=4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c namespace=k8s.io Nov 6 23:23:16.531869 containerd[1479]: time="2025-11-06T23:23:16.531686134Z" level=warning msg="cleaning up after shim disconnected" id=4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c namespace=k8s.io Nov 6 23:23:16.531869 containerd[1479]: time="2025-11-06T23:23:16.531696134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:23:16.931716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c-rootfs.mount: Deactivated successfully. 
Nov 6 23:23:17.366298 kubelet[2574]: E1106 23:23:17.365855 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:17.366298 kubelet[2574]: E1106 23:23:17.365905 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:17.371551 containerd[1479]: time="2025-11-06T23:23:17.371463344Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:23:17.390546 containerd[1479]: time="2025-11-06T23:23:17.390500130Z" level=info msg="CreateContainer within sandbox \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\"" Nov 6 23:23:17.393459 containerd[1479]: time="2025-11-06T23:23:17.393424106Z" level=info msg="StartContainer for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\"" Nov 6 23:23:17.421444 systemd[1]: Started cri-containerd-5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d.scope - libcontainer container 5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d. 
Nov 6 23:23:17.454675 containerd[1479]: time="2025-11-06T23:23:17.454595918Z" level=info msg="StartContainer for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" returns successfully" Nov 6 23:23:17.524973 kubelet[2574]: I1106 23:23:17.524926 2574 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 23:23:17.568911 systemd[1]: Created slice kubepods-burstable-pod67a0798f_bb8c_4633_afa3_bfe9c87e53ab.slice - libcontainer container kubepods-burstable-pod67a0798f_bb8c_4633_afa3_bfe9c87e53ab.slice. Nov 6 23:23:17.575476 systemd[1]: Created slice kubepods-burstable-pod6adcd0bc_69ce_4bf6_b476_75d912597d13.slice - libcontainer container kubepods-burstable-pod6adcd0bc_69ce_4bf6_b476_75d912597d13.slice. Nov 6 23:23:17.639877 kubelet[2574]: I1106 23:23:17.639758 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adcd0bc-69ce-4bf6-b476-75d912597d13-config-volume\") pod \"coredns-674b8bbfcf-xkmmm\" (UID: \"6adcd0bc-69ce-4bf6-b476-75d912597d13\") " pod="kube-system/coredns-674b8bbfcf-xkmmm" Nov 6 23:23:17.640345 kubelet[2574]: I1106 23:23:17.640050 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67a0798f-bb8c-4633-afa3-bfe9c87e53ab-config-volume\") pod \"coredns-674b8bbfcf-spgmg\" (UID: \"67a0798f-bb8c-4633-afa3-bfe9c87e53ab\") " pod="kube-system/coredns-674b8bbfcf-spgmg" Nov 6 23:23:17.640345 kubelet[2574]: I1106 23:23:17.640287 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkdk\" (UniqueName: \"kubernetes.io/projected/67a0798f-bb8c-4633-afa3-bfe9c87e53ab-kube-api-access-vlkdk\") pod \"coredns-674b8bbfcf-spgmg\" (UID: \"67a0798f-bb8c-4633-afa3-bfe9c87e53ab\") " pod="kube-system/coredns-674b8bbfcf-spgmg" Nov 6 23:23:17.640345 kubelet[2574]: 
I1106 23:23:17.640316 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9bhh\" (UniqueName: \"kubernetes.io/projected/6adcd0bc-69ce-4bf6-b476-75d912597d13-kube-api-access-t9bhh\") pod \"coredns-674b8bbfcf-xkmmm\" (UID: \"6adcd0bc-69ce-4bf6-b476-75d912597d13\") " pod="kube-system/coredns-674b8bbfcf-xkmmm" Nov 6 23:23:17.873349 kubelet[2574]: E1106 23:23:17.873307 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:17.874921 containerd[1479]: time="2025-11-06T23:23:17.874200280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spgmg,Uid:67a0798f-bb8c-4633-afa3-bfe9c87e53ab,Namespace:kube-system,Attempt:0,}" Nov 6 23:23:17.880037 kubelet[2574]: E1106 23:23:17.879992 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:17.881026 containerd[1479]: time="2025-11-06T23:23:17.880973463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xkmmm,Uid:6adcd0bc-69ce-4bf6-b476-75d912597d13,Namespace:kube-system,Attempt:0,}" Nov 6 23:23:18.371214 kubelet[2574]: E1106 23:23:18.371179 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:18.388185 kubelet[2574]: I1106 23:23:18.388122 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xgnm9" podStartSLOduration=6.218952257 podStartE2EDuration="19.388105436s" podCreationTimestamp="2025-11-06 23:22:59 +0000 UTC" firstStartedPulling="2025-11-06 23:23:00.731367594 +0000 UTC m=+7.517077126" lastFinishedPulling="2025-11-06 23:23:13.900520813 +0000 UTC m=+20.686230305" 
observedRunningTime="2025-11-06 23:23:18.386478265 +0000 UTC m=+25.172187797" watchObservedRunningTime="2025-11-06 23:23:18.388105436 +0000 UTC m=+25.173814968" Nov 6 23:23:19.214625 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:33552.service - OpenSSH per-connection server daemon (10.0.0.1:33552). Nov 6 23:23:19.263676 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 33552 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:19.264949 sshd-session[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:19.269218 systemd-logind[1467]: New session 8 of user core. Nov 6 23:23:19.277410 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:23:19.373809 kubelet[2574]: E1106 23:23:19.373316 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:19.405523 sshd[3445]: Connection closed by 10.0.0.1 port 33552 Nov 6 23:23:19.406855 sshd-session[3443]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:19.409918 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:33552.service: Deactivated successfully. Nov 6 23:23:19.411803 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:23:19.412479 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:23:19.413380 systemd-logind[1467]: Removed session 8. 
Nov 6 23:23:19.525027 systemd-networkd[1401]: cilium_host: Link UP Nov 6 23:23:19.525276 systemd-networkd[1401]: cilium_net: Link UP Nov 6 23:23:19.525575 systemd-networkd[1401]: cilium_net: Gained carrier Nov 6 23:23:19.525734 systemd-networkd[1401]: cilium_host: Gained carrier Nov 6 23:23:19.525831 systemd-networkd[1401]: cilium_net: Gained IPv6LL Nov 6 23:23:19.525954 systemd-networkd[1401]: cilium_host: Gained IPv6LL Nov 6 23:23:19.599793 systemd-networkd[1401]: cilium_vxlan: Link UP Nov 6 23:23:19.599803 systemd-networkd[1401]: cilium_vxlan: Gained carrier Nov 6 23:23:19.853291 kernel: NET: Registered PF_ALG protocol family Nov 6 23:23:20.376388 kubelet[2574]: E1106 23:23:20.375233 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:20.420149 systemd-networkd[1401]: lxc_health: Link UP Nov 6 23:23:20.420510 systemd-networkd[1401]: lxc_health: Gained carrier Nov 6 23:23:20.961290 kernel: eth0: renamed from tmp9725e Nov 6 23:23:20.967574 systemd-networkd[1401]: lxc2f150fc2b365: Link UP Nov 6 23:23:20.968357 kernel: eth0: renamed from tmp4680a Nov 6 23:23:20.977194 systemd-networkd[1401]: lxc164f463a9ee8: Link UP Nov 6 23:23:20.977653 systemd-networkd[1401]: lxc2f150fc2b365: Gained carrier Nov 6 23:23:20.977772 systemd-networkd[1401]: lxc164f463a9ee8: Gained carrier Nov 6 23:23:21.153437 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL Nov 6 23:23:21.377885 kubelet[2574]: E1106 23:23:21.377768 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:22.369403 systemd-networkd[1401]: lxc_health: Gained IPv6LL Nov 6 23:23:22.378894 kubelet[2574]: E1106 23:23:22.378863 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:22.689369 systemd-networkd[1401]: lxc2f150fc2b365: Gained IPv6LL Nov 6 23:23:22.689635 systemd-networkd[1401]: lxc164f463a9ee8: Gained IPv6LL Nov 6 23:23:23.379824 kubelet[2574]: E1106 23:23:23.379791 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:24.415823 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:45782.service - OpenSSH per-connection server daemon (10.0.0.1:45782). Nov 6 23:23:24.468583 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 45782 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:24.470563 sshd-session[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:24.476498 systemd-logind[1467]: New session 9 of user core. Nov 6 23:23:24.481376 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 23:23:24.568938 containerd[1479]: time="2025-11-06T23:23:24.568600238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:23:24.568938 containerd[1479]: time="2025-11-06T23:23:24.568677960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:23:24.568938 containerd[1479]: time="2025-11-06T23:23:24.568692801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:24.568938 containerd[1479]: time="2025-11-06T23:23:24.568792483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:24.578631 containerd[1479]: time="2025-11-06T23:23:24.578521963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:23:24.578631 containerd[1479]: time="2025-11-06T23:23:24.578596845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:23:24.578631 containerd[1479]: time="2025-11-06T23:23:24.578615686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:24.578834 containerd[1479]: time="2025-11-06T23:23:24.578709808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:23:24.603453 systemd[1]: Started cri-containerd-9725e19b0e1499adb766ec01069ff7fa410e70780b21e18c9beac7d9c0b60141.scope - libcontainer container 9725e19b0e1499adb766ec01069ff7fa410e70780b21e18c9beac7d9c0b60141. Nov 6 23:23:24.606865 systemd[1]: Started cri-containerd-4680af60aaf436fe928f26a94696461a3190ac9090f7643264b9d6bbb721ecc7.scope - libcontainer container 4680af60aaf436fe928f26a94696461a3190ac9090f7643264b9d6bbb721ecc7. 
Nov 6 23:23:24.615721 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 23:23:24.623015 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 23:23:24.640471 sshd[3849]: Connection closed by 10.0.0.1 port 45782 Nov 6 23:23:24.641177 sshd-session[3844]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:24.644278 containerd[1479]: time="2025-11-06T23:23:24.644213264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xkmmm,Uid:6adcd0bc-69ce-4bf6-b476-75d912597d13,Namespace:kube-system,Attempt:0,} returns sandbox id \"9725e19b0e1499adb766ec01069ff7fa410e70780b21e18c9beac7d9c0b60141\"" Nov 6 23:23:24.645070 kubelet[2574]: E1106 23:23:24.645039 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:24.647701 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:45782.service: Deactivated successfully. Nov 6 23:23:24.649984 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:23:24.651092 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:23:24.652068 systemd-logind[1467]: Removed session 9. 
Nov 6 23:23:24.652587 containerd[1479]: time="2025-11-06T23:23:24.652119419Z" level=info msg="CreateContainer within sandbox \"9725e19b0e1499adb766ec01069ff7fa410e70780b21e18c9beac7d9c0b60141\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:23:24.669218 containerd[1479]: time="2025-11-06T23:23:24.669111438Z" level=info msg="CreateContainer within sandbox \"9725e19b0e1499adb766ec01069ff7fa410e70780b21e18c9beac7d9c0b60141\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3507011c4da25ab83852e809e72469e89f2b629971f456d3456e93939e698d5d\"" Nov 6 23:23:24.669967 containerd[1479]: time="2025-11-06T23:23:24.669935899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spgmg,Uid:67a0798f-bb8c-4633-afa3-bfe9c87e53ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"4680af60aaf436fe928f26a94696461a3190ac9090f7643264b9d6bbb721ecc7\"" Nov 6 23:23:24.670818 containerd[1479]: time="2025-11-06T23:23:24.670791920Z" level=info msg="StartContainer for \"3507011c4da25ab83852e809e72469e89f2b629971f456d3456e93939e698d5d\"" Nov 6 23:23:24.671493 kubelet[2574]: E1106 23:23:24.671468 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:24.675087 containerd[1479]: time="2025-11-06T23:23:24.675049905Z" level=info msg="CreateContainer within sandbox \"4680af60aaf436fe928f26a94696461a3190ac9090f7643264b9d6bbb721ecc7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:23:24.691505 containerd[1479]: time="2025-11-06T23:23:24.691460710Z" level=info msg="CreateContainer within sandbox \"4680af60aaf436fe928f26a94696461a3190ac9090f7643264b9d6bbb721ecc7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"645b9ded65a1b79c3b70b23e58eeebe53cfd51c97716687360b9d1976b916fd5\"" Nov 6 23:23:24.694772 containerd[1479]: time="2025-11-06T23:23:24.692617858Z" 
level=info msg="StartContainer for \"645b9ded65a1b79c3b70b23e58eeebe53cfd51c97716687360b9d1976b916fd5\"" Nov 6 23:23:24.729468 systemd[1]: Started cri-containerd-3507011c4da25ab83852e809e72469e89f2b629971f456d3456e93939e698d5d.scope - libcontainer container 3507011c4da25ab83852e809e72469e89f2b629971f456d3456e93939e698d5d. Nov 6 23:23:24.730823 systemd[1]: Started cri-containerd-645b9ded65a1b79c3b70b23e58eeebe53cfd51c97716687360b9d1976b916fd5.scope - libcontainer container 645b9ded65a1b79c3b70b23e58eeebe53cfd51c97716687360b9d1976b916fd5. Nov 6 23:23:24.790265 containerd[1479]: time="2025-11-06T23:23:24.790184826Z" level=info msg="StartContainer for \"3507011c4da25ab83852e809e72469e89f2b629971f456d3456e93939e698d5d\" returns successfully" Nov 6 23:23:24.790457 containerd[1479]: time="2025-11-06T23:23:24.790194146Z" level=info msg="StartContainer for \"645b9ded65a1b79c3b70b23e58eeebe53cfd51c97716687360b9d1976b916fd5\" returns successfully" Nov 6 23:23:25.405932 kubelet[2574]: E1106 23:23:25.405612 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:25.408966 kubelet[2574]: E1106 23:23:25.408790 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:25.419997 kubelet[2574]: I1106 23:23:25.419108 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xkmmm" podStartSLOduration=25.419091406 podStartE2EDuration="25.419091406s" podCreationTimestamp="2025-11-06 23:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:23:25.41881996 +0000 UTC m=+32.204529492" watchObservedRunningTime="2025-11-06 23:23:25.419091406 +0000 UTC m=+32.204800938" Nov 6 
23:23:25.458253 kubelet[2574]: I1106 23:23:25.458183 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-spgmg" podStartSLOduration=25.458166655 podStartE2EDuration="25.458166655s" podCreationTimestamp="2025-11-06 23:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:23:25.457655803 +0000 UTC m=+32.243365335" watchObservedRunningTime="2025-11-06 23:23:25.458166655 +0000 UTC m=+32.243876147" Nov 6 23:23:25.580467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792040500.mount: Deactivated successfully. Nov 6 23:23:26.410145 kubelet[2574]: E1106 23:23:26.410098 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:26.410519 kubelet[2574]: E1106 23:23:26.410186 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:27.411682 kubelet[2574]: E1106 23:23:27.411536 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:27.411682 kubelet[2574]: E1106 23:23:27.411580 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:23:29.652568 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:48876.service - OpenSSH per-connection server daemon (10.0.0.1:48876). 
Nov 6 23:23:29.704635 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 48876 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:29.706373 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:29.710169 systemd-logind[1467]: New session 10 of user core. Nov 6 23:23:29.717481 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 23:23:29.833540 sshd[4029]: Connection closed by 10.0.0.1 port 48876 Nov 6 23:23:29.834253 sshd-session[4027]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:29.837963 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:48876.service: Deactivated successfully. Nov 6 23:23:29.839928 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:23:29.840527 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:23:29.841325 systemd-logind[1467]: Removed session 10. Nov 6 23:23:34.849042 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:48892.service - OpenSSH per-connection server daemon (10.0.0.1:48892). Nov 6 23:23:34.893040 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 48892 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:34.894393 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:34.898313 systemd-logind[1467]: New session 11 of user core. Nov 6 23:23:34.906427 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:23:35.033941 sshd[4050]: Connection closed by 10.0.0.1 port 48892 Nov 6 23:23:35.034558 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:35.043702 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:48892.service: Deactivated successfully. Nov 6 23:23:35.046317 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:23:35.047902 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. 
Nov 6 23:23:35.058551 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:48904.service - OpenSSH per-connection server daemon (10.0.0.1:48904). Nov 6 23:23:35.060166 systemd-logind[1467]: Removed session 11. Nov 6 23:23:35.101692 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 48904 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:35.103028 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:35.108468 systemd-logind[1467]: New session 12 of user core. Nov 6 23:23:35.120459 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 23:23:35.308872 sshd[4066]: Connection closed by 10.0.0.1 port 48904 Nov 6 23:23:35.309257 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:35.325962 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:48904.service: Deactivated successfully. Nov 6 23:23:35.327649 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:23:35.328594 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:23:35.335590 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:48906.service - OpenSSH per-connection server daemon (10.0.0.1:48906). Nov 6 23:23:35.336718 systemd-logind[1467]: Removed session 12. Nov 6 23:23:35.390873 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 48906 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:35.395397 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:35.404164 systemd-logind[1467]: New session 13 of user core. Nov 6 23:23:35.410436 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 23:23:35.520906 sshd[4079]: Connection closed by 10.0.0.1 port 48906 Nov 6 23:23:35.521293 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:35.524579 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:48906.service: Deactivated successfully. 
Nov 6 23:23:35.526758 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:23:35.528700 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:23:35.529886 systemd-logind[1467]: Removed session 13. Nov 6 23:23:40.537231 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:48350.service - OpenSSH per-connection server daemon (10.0.0.1:48350). Nov 6 23:23:40.587929 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 48350 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:40.589514 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:40.596941 systemd-logind[1467]: New session 14 of user core. Nov 6 23:23:40.607517 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:23:40.724382 sshd[4095]: Connection closed by 10.0.0.1 port 48350 Nov 6 23:23:40.723861 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:40.728208 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:48350.service: Deactivated successfully. Nov 6 23:23:40.730082 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:23:40.732230 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:23:40.737951 systemd-logind[1467]: Removed session 14. Nov 6 23:23:45.735798 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:48366.service - OpenSSH per-connection server daemon (10.0.0.1:48366). Nov 6 23:23:45.781233 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 48366 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:45.783178 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:45.787942 systemd-logind[1467]: New session 15 of user core. Nov 6 23:23:45.795499 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 6 23:23:45.914772 sshd[4110]: Connection closed by 10.0.0.1 port 48366 Nov 6 23:23:45.915121 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:45.926608 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:48366.service: Deactivated successfully. Nov 6 23:23:45.929935 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:23:45.931565 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:23:45.939613 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:48382.service - OpenSSH per-connection server daemon (10.0.0.1:48382). Nov 6 23:23:45.940809 systemd-logind[1467]: Removed session 15. Nov 6 23:23:45.988952 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 48382 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:45.990397 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:45.995330 systemd-logind[1467]: New session 16 of user core. Nov 6 23:23:46.011456 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:23:46.226618 sshd[4126]: Connection closed by 10.0.0.1 port 48382 Nov 6 23:23:46.226728 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:46.235598 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:48382.service: Deactivated successfully. Nov 6 23:23:46.237726 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:23:46.242323 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:23:46.255598 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:48386.service - OpenSSH per-connection server daemon (10.0.0.1:48386). Nov 6 23:23:46.256386 systemd-logind[1467]: Removed session 16. 
Nov 6 23:23:46.305741 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 48386 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:46.307197 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:46.312650 systemd-logind[1467]: New session 17 of user core. Nov 6 23:23:46.327502 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:23:46.875038 sshd[4140]: Connection closed by 10.0.0.1 port 48386 Nov 6 23:23:46.875687 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:46.888801 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:48386.service: Deactivated successfully. Nov 6 23:23:46.892435 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:23:46.894547 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:23:46.903067 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:48400.service - OpenSSH per-connection server daemon (10.0.0.1:48400). Nov 6 23:23:46.906866 systemd-logind[1467]: Removed session 17. Nov 6 23:23:46.953124 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 48400 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:46.954476 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:46.958590 systemd-logind[1467]: New session 18 of user core. Nov 6 23:23:46.969420 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:23:47.209970 sshd[4163]: Connection closed by 10.0.0.1 port 48400 Nov 6 23:23:47.209745 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:47.225675 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:48400.service: Deactivated successfully. Nov 6 23:23:47.227393 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:23:47.228648 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. 
Nov 6 23:23:47.229845 systemd-logind[1467]: Removed session 18. Nov 6 23:23:47.238560 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:48402.service - OpenSSH per-connection server daemon (10.0.0.1:48402). Nov 6 23:23:47.287735 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 48402 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:47.289000 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:47.297379 systemd-logind[1467]: New session 19 of user core. Nov 6 23:23:47.312516 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 23:23:47.432237 sshd[4177]: Connection closed by 10.0.0.1 port 48402 Nov 6 23:23:47.432584 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:47.435519 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:48402.service: Deactivated successfully. Nov 6 23:23:47.437797 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:23:47.438603 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:23:47.439294 systemd-logind[1467]: Removed session 19. Nov 6 23:23:52.452793 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:57852.service - OpenSSH per-connection server daemon (10.0.0.1:57852). Nov 6 23:23:52.501194 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 57852 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:52.502785 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:52.507898 systemd-logind[1467]: New session 20 of user core. Nov 6 23:23:52.522504 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:23:52.654560 sshd[4193]: Connection closed by 10.0.0.1 port 57852 Nov 6 23:23:52.654949 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:52.658160 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:57852.service: Deactivated successfully. 
Nov 6 23:23:52.660484 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:23:52.661945 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:23:52.663147 systemd-logind[1467]: Removed session 20. Nov 6 23:23:57.689870 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:57858.service - OpenSSH per-connection server daemon (10.0.0.1:57858). Nov 6 23:23:57.739757 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 57858 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:23:57.740993 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:23:57.744739 systemd-logind[1467]: New session 21 of user core. Nov 6 23:23:57.755488 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:23:57.866736 sshd[4213]: Connection closed by 10.0.0.1 port 57858 Nov 6 23:23:57.867069 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Nov 6 23:23:57.871598 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:57858.service: Deactivated successfully. Nov 6 23:23:57.873202 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:23:57.874887 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:23:57.875856 systemd-logind[1467]: Removed session 21. Nov 6 23:24:02.881968 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:39558.service - OpenSSH per-connection server daemon (10.0.0.1:39558). Nov 6 23:24:02.925779 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:24:02.926952 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:24:02.930725 systemd-logind[1467]: New session 22 of user core. Nov 6 23:24:02.944436 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 23:24:03.050767 sshd[4230]: Connection closed by 10.0.0.1 port 39558 Nov 6 23:24:03.051313 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Nov 6 23:24:03.064658 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:39558.service: Deactivated successfully. Nov 6 23:24:03.067949 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:24:03.068753 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:24:03.075520 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:39560.service - OpenSSH per-connection server daemon (10.0.0.1:39560). Nov 6 23:24:03.076509 systemd-logind[1467]: Removed session 22. Nov 6 23:24:03.116666 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 39560 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:24:03.117867 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:24:03.122517 systemd-logind[1467]: New session 23 of user core. Nov 6 23:24:03.137414 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:24:05.561772 containerd[1479]: time="2025-11-06T23:24:05.561549025Z" level=info msg="StopContainer for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" with timeout 30 (s)" Nov 6 23:24:05.567159 containerd[1479]: time="2025-11-06T23:24:05.566181639Z" level=info msg="Stop container \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" with signal terminated" Nov 6 23:24:05.575667 systemd[1]: run-containerd-runc-k8s.io-5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d-runc.rDs0iD.mount: Deactivated successfully. Nov 6 23:24:05.581514 systemd[1]: cri-containerd-304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5.scope: Deactivated successfully. 
Nov 6 23:24:05.600135 containerd[1479]: time="2025-11-06T23:24:05.600097634Z" level=info msg="StopContainer for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" with timeout 2 (s)" Nov 6 23:24:05.602996 containerd[1479]: time="2025-11-06T23:24:05.602845786Z" level=info msg="Stop container \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" with signal terminated" Nov 6 23:24:05.605671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5-rootfs.mount: Deactivated successfully. Nov 6 23:24:05.611720 systemd-networkd[1401]: lxc_health: Link DOWN Nov 6 23:24:05.611728 systemd-networkd[1401]: lxc_health: Lost carrier Nov 6 23:24:05.616648 containerd[1479]: time="2025-11-06T23:24:05.616452105Z" level=info msg="shim disconnected" id=304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5 namespace=k8s.io Nov 6 23:24:05.616648 containerd[1479]: time="2025-11-06T23:24:05.616513146Z" level=warning msg="cleaning up after shim disconnected" id=304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5 namespace=k8s.io Nov 6 23:24:05.616648 containerd[1479]: time="2025-11-06T23:24:05.616521186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:24:05.616648 containerd[1479]: time="2025-11-06T23:24:05.616618187Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:24:05.626529 systemd[1]: cri-containerd-5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d.scope: Deactivated successfully. Nov 6 23:24:05.626976 systemd[1]: cri-containerd-5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d.scope: Consumed 6.264s CPU time, 124.6M memory peak, 152K read from disk, 12.9M written to disk. 
Nov 6 23:24:05.631972 containerd[1479]: time="2025-11-06T23:24:05.631920685Z" level=info msg="StopContainer for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" returns successfully" Nov 6 23:24:05.632704 containerd[1479]: time="2025-11-06T23:24:05.632673974Z" level=info msg="StopPodSandbox for \"2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f\"" Nov 6 23:24:05.632794 containerd[1479]: time="2025-11-06T23:24:05.632712495Z" level=info msg="Container to stop \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:24:05.635931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f-shm.mount: Deactivated successfully. Nov 6 23:24:05.642356 systemd[1]: cri-containerd-2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f.scope: Deactivated successfully. Nov 6 23:24:05.651017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d-rootfs.mount: Deactivated successfully. 
Nov 6 23:24:05.657924 containerd[1479]: time="2025-11-06T23:24:05.657866868Z" level=info msg="shim disconnected" id=5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d namespace=k8s.io Nov 6 23:24:05.657924 containerd[1479]: time="2025-11-06T23:24:05.657917268Z" level=warning msg="cleaning up after shim disconnected" id=5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d namespace=k8s.io Nov 6 23:24:05.657924 containerd[1479]: time="2025-11-06T23:24:05.657925548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:24:05.671903 containerd[1479]: time="2025-11-06T23:24:05.671863751Z" level=info msg="StopContainer for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" returns successfully" Nov 6 23:24:05.672489 containerd[1479]: time="2025-11-06T23:24:05.672419037Z" level=info msg="StopPodSandbox for \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\"" Nov 6 23:24:05.672550 containerd[1479]: time="2025-11-06T23:24:05.672496358Z" level=info msg="Container to stop \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:24:05.672550 containerd[1479]: time="2025-11-06T23:24:05.672509918Z" level=info msg="Container to stop \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:24:05.672550 containerd[1479]: time="2025-11-06T23:24:05.672542839Z" level=info msg="Container to stop \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:24:05.672625 containerd[1479]: time="2025-11-06T23:24:05.672553799Z" level=info msg="Container to stop \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:24:05.672625 containerd[1479]: 
time="2025-11-06T23:24:05.672562559Z" level=info msg="Container to stop \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:24:05.677310 systemd[1]: cri-containerd-0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4.scope: Deactivated successfully. Nov 6 23:24:05.689898 containerd[1479]: time="2025-11-06T23:24:05.689814320Z" level=info msg="shim disconnected" id=2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f namespace=k8s.io Nov 6 23:24:05.689898 containerd[1479]: time="2025-11-06T23:24:05.689866281Z" level=warning msg="cleaning up after shim disconnected" id=2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f namespace=k8s.io Nov 6 23:24:05.689898 containerd[1479]: time="2025-11-06T23:24:05.689874401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:24:05.699490 containerd[1479]: time="2025-11-06T23:24:05.699287031Z" level=info msg="shim disconnected" id=0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4 namespace=k8s.io Nov 6 23:24:05.699490 containerd[1479]: time="2025-11-06T23:24:05.699345471Z" level=warning msg="cleaning up after shim disconnected" id=0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4 namespace=k8s.io Nov 6 23:24:05.699490 containerd[1479]: time="2025-11-06T23:24:05.699353911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:24:05.702543 containerd[1479]: time="2025-11-06T23:24:05.702495348Z" level=info msg="TearDown network for sandbox \"2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f\" successfully" Nov 6 23:24:05.702543 containerd[1479]: time="2025-11-06T23:24:05.702528708Z" level=info msg="StopPodSandbox for \"2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f\" returns successfully" Nov 6 23:24:05.717921 containerd[1479]: time="2025-11-06T23:24:05.716736834Z" level=info msg="TearDown network for sandbox 
\"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" successfully" Nov 6 23:24:05.717921 containerd[1479]: time="2025-11-06T23:24:05.716773394Z" level=info msg="StopPodSandbox for \"0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4\" returns successfully" Nov 6 23:24:05.746782 kubelet[2574]: I1106 23:24:05.746743 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-cilium-config-path\") pod \"ad0e2e00-67f3-448a-a5e7-1534cbe79fff\" (UID: \"ad0e2e00-67f3-448a-a5e7-1534cbe79fff\") " Nov 6 23:24:05.746782 kubelet[2574]: I1106 23:24:05.746789 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snm6f\" (UniqueName: \"kubernetes.io/projected/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-kube-api-access-snm6f\") pod \"ad0e2e00-67f3-448a-a5e7-1534cbe79fff\" (UID: \"ad0e2e00-67f3-448a-a5e7-1534cbe79fff\") " Nov 6 23:24:05.757392 kubelet[2574]: I1106 23:24:05.757284 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-kube-api-access-snm6f" (OuterVolumeSpecName: "kube-api-access-snm6f") pod "ad0e2e00-67f3-448a-a5e7-1534cbe79fff" (UID: "ad0e2e00-67f3-448a-a5e7-1534cbe79fff"). InnerVolumeSpecName "kube-api-access-snm6f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:24:05.762848 kubelet[2574]: I1106 23:24:05.762802 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad0e2e00-67f3-448a-a5e7-1534cbe79fff" (UID: "ad0e2e00-67f3-448a-a5e7-1534cbe79fff"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:24:05.847911 kubelet[2574]: I1106 23:24:05.847790 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-xtables-lock\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.847911 kubelet[2574]: I1106 23:24:05.847829 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-net\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.847911 kubelet[2574]: I1106 23:24:05.847849 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-lib-modules\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.847911 kubelet[2574]: I1106 23:24:05.847865 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-kernel\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.847911 kubelet[2574]: I1106 23:24:05.847888 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hubble-tls\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.847911 kubelet[2574]: I1106 23:24:05.847912 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-etc-cni-netd\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848127 kubelet[2574]: I1106 23:24:05.847928 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hostproc\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848127 kubelet[2574]: I1106 23:24:05.847942 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cni-path\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848127 kubelet[2574]: I1106 23:24:05.847930 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848127 kubelet[2574]: I1106 23:24:05.847956 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-run\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848127 kubelet[2574]: I1106 23:24:05.847973 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6htdt\" (UniqueName: \"kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-kube-api-access-6htdt\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848127 kubelet[2574]: I1106 23:24:05.847991 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed5292a9-e268-454c-bd6a-1912f01cc6bc-clustermesh-secrets\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848284 kubelet[2574]: I1106 23:24:05.847993 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848284 kubelet[2574]: I1106 23:24:05.848006 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-bpf-maps\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848284 kubelet[2574]: I1106 23:24:05.848020 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848284 kubelet[2574]: I1106 23:24:05.848025 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-config-path\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848376 kubelet[2574]: I1106 23:24:05.848276 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-cgroup\") pod \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\" (UID: \"ed5292a9-e268-454c-bd6a-1912f01cc6bc\") " Nov 6 23:24:05.848376 kubelet[2574]: I1106 23:24:05.848356 2574 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.848376 kubelet[2574]: I1106 23:24:05.848368 2574 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.848438 kubelet[2574]: I1106 23:24:05.848377 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.848438 kubelet[2574]: I1106 23:24:05.848386 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snm6f\" (UniqueName: \"kubernetes.io/projected/ad0e2e00-67f3-448a-a5e7-1534cbe79fff-kube-api-access-snm6f\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.848438 kubelet[2574]: I1106 23:24:05.848394 2574 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.848438 kubelet[2574]: I1106 23:24:05.848300 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848438 kubelet[2574]: I1106 23:24:05.848318 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848438 kubelet[2574]: I1106 23:24:05.848348 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848577 kubelet[2574]: I1106 23:24:05.848358 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848577 kubelet[2574]: I1106 23:24:05.848440 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.848819 kubelet[2574]: I1106 23:24:05.848685 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.850604 kubelet[2574]: I1106 23:24:05.850530 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:24:05.850604 kubelet[2574]: I1106 23:24:05.850580 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:24:05.850604 kubelet[2574]: I1106 23:24:05.850587 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:24:05.850735 kubelet[2574]: I1106 23:24:05.850638 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5292a9-e268-454c-bd6a-1912f01cc6bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:24:05.851925 kubelet[2574]: I1106 23:24:05.851883 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-kube-api-access-6htdt" (OuterVolumeSpecName: "kube-api-access-6htdt") pod "ed5292a9-e268-454c-bd6a-1912f01cc6bc" (UID: "ed5292a9-e268-454c-bd6a-1912f01cc6bc"). InnerVolumeSpecName "kube-api-access-6htdt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:24:05.949369 kubelet[2574]: I1106 23:24:05.949323 2574 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949369 kubelet[2574]: I1106 23:24:05.949355 2574 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949369 kubelet[2574]: I1106 23:24:05.949365 2574 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949369 kubelet[2574]: I1106 23:24:05.949374 2574 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949369 kubelet[2574]: I1106 23:24:05.949383 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949654 kubelet[2574]: I1106 23:24:05.949391 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6htdt\" (UniqueName: 
\"kubernetes.io/projected/ed5292a9-e268-454c-bd6a-1912f01cc6bc-kube-api-access-6htdt\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949654 kubelet[2574]: I1106 23:24:05.949401 2574 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed5292a9-e268-454c-bd6a-1912f01cc6bc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949654 kubelet[2574]: I1106 23:24:05.949409 2574 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949654 kubelet[2574]: I1106 23:24:05.949416 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949654 kubelet[2574]: I1106 23:24:05.949423 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:05.949654 kubelet[2574]: I1106 23:24:05.949430 2574 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed5292a9-e268-454c-bd6a-1912f01cc6bc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 6 23:24:06.500853 kubelet[2574]: I1106 23:24:06.500818 2574 scope.go:117] "RemoveContainer" containerID="304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5" Nov 6 23:24:06.501686 systemd[1]: Removed slice kubepods-besteffort-podad0e2e00_67f3_448a_a5e7_1534cbe79fff.slice - libcontainer container kubepods-besteffort-podad0e2e00_67f3_448a_a5e7_1534cbe79fff.slice. 
Nov 6 23:24:06.502588 containerd[1479]: time="2025-11-06T23:24:06.502159011Z" level=info msg="RemoveContainer for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\"" Nov 6 23:24:06.507253 systemd[1]: Removed slice kubepods-burstable-poded5292a9_e268_454c_bd6a_1912f01cc6bc.slice - libcontainer container kubepods-burstable-poded5292a9_e268_454c_bd6a_1912f01cc6bc.slice. Nov 6 23:24:06.507348 systemd[1]: kubepods-burstable-poded5292a9_e268_454c_bd6a_1912f01cc6bc.slice: Consumed 6.343s CPU time, 124.9M memory peak, 172K read from disk, 12.9M written to disk. Nov 6 23:24:06.519233 containerd[1479]: time="2025-11-06T23:24:06.518386919Z" level=info msg="RemoveContainer for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" returns successfully" Nov 6 23:24:06.519233 containerd[1479]: time="2025-11-06T23:24:06.518898005Z" level=error msg="ContainerStatus for \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\": not found" Nov 6 23:24:06.519387 kubelet[2574]: I1106 23:24:06.518656 2574 scope.go:117] "RemoveContainer" containerID="304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5" Nov 6 23:24:06.519387 kubelet[2574]: E1106 23:24:06.519127 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\": not found" containerID="304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5" Nov 6 23:24:06.519467 kubelet[2574]: I1106 23:24:06.519298 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5"} err="failed to get container status 
\"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"304ec312a02bb99fe66d31ceb65f3a7ad67cc58fd7ecd58dd9e0794257d8e4d5\": not found" Nov 6 23:24:06.519515 kubelet[2574]: I1106 23:24:06.519465 2574 scope.go:117] "RemoveContainer" containerID="5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d" Nov 6 23:24:06.521760 containerd[1479]: time="2025-11-06T23:24:06.521729478Z" level=info msg="RemoveContainer for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\"" Nov 6 23:24:06.525461 containerd[1479]: time="2025-11-06T23:24:06.525290719Z" level=info msg="RemoveContainer for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" returns successfully" Nov 6 23:24:06.526037 kubelet[2574]: I1106 23:24:06.525522 2574 scope.go:117] "RemoveContainer" containerID="4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c" Nov 6 23:24:06.528069 containerd[1479]: time="2025-11-06T23:24:06.527807349Z" level=info msg="RemoveContainer for \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\"" Nov 6 23:24:06.535654 containerd[1479]: time="2025-11-06T23:24:06.534819150Z" level=info msg="RemoveContainer for \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\" returns successfully" Nov 6 23:24:06.537108 containerd[1479]: time="2025-11-06T23:24:06.536800573Z" level=info msg="RemoveContainer for \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\"" Nov 6 23:24:06.537141 kubelet[2574]: I1106 23:24:06.535850 2574 scope.go:117] "RemoveContainer" containerID="74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd" Nov 6 23:24:06.539833 containerd[1479]: time="2025-11-06T23:24:06.539793048Z" level=info msg="RemoveContainer for \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\" returns successfully" Nov 6 23:24:06.540083 kubelet[2574]: I1106 23:24:06.539968 2574 scope.go:117] 
"RemoveContainer" containerID="4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788" Nov 6 23:24:06.541158 containerd[1479]: time="2025-11-06T23:24:06.540988142Z" level=info msg="RemoveContainer for \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\"" Nov 6 23:24:06.549020 containerd[1479]: time="2025-11-06T23:24:06.548975394Z" level=info msg="RemoveContainer for \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\" returns successfully" Nov 6 23:24:06.549310 kubelet[2574]: I1106 23:24:06.549251 2574 scope.go:117] "RemoveContainer" containerID="6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78" Nov 6 23:24:06.550422 containerd[1479]: time="2025-11-06T23:24:06.550389851Z" level=info msg="RemoveContainer for \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\"" Nov 6 23:24:06.552950 containerd[1479]: time="2025-11-06T23:24:06.552923240Z" level=info msg="RemoveContainer for \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\" returns successfully" Nov 6 23:24:06.553144 kubelet[2574]: I1106 23:24:06.553105 2574 scope.go:117] "RemoveContainer" containerID="5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d" Nov 6 23:24:06.553406 containerd[1479]: time="2025-11-06T23:24:06.553370526Z" level=error msg="ContainerStatus for \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\": not found" Nov 6 23:24:06.553542 kubelet[2574]: E1106 23:24:06.553503 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\": not found" containerID="5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d" Nov 6 23:24:06.553581 kubelet[2574]: I1106 
23:24:06.553534 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d"} err="failed to get container status \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b63ca59f42a7940da7c539721c17a9ff18151bd285daac99ba1bb97d18e740d\": not found" Nov 6 23:24:06.553581 kubelet[2574]: I1106 23:24:06.553553 2574 scope.go:117] "RemoveContainer" containerID="4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c" Nov 6 23:24:06.553715 containerd[1479]: time="2025-11-06T23:24:06.553687049Z" level=error msg="ContainerStatus for \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\": not found" Nov 6 23:24:06.553885 kubelet[2574]: E1106 23:24:06.553820 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\": not found" containerID="4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c" Nov 6 23:24:06.553885 kubelet[2574]: I1106 23:24:06.553852 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c"} err="failed to get container status \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4551baaa3eecd2d2cba3f7d542d5f059bc0d9631f7d02b9c8509463125061a6c\": not found" Nov 6 23:24:06.553885 kubelet[2574]: I1106 23:24:06.553870 2574 scope.go:117] "RemoveContainer" 
containerID="74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd" Nov 6 23:24:06.554206 kubelet[2574]: E1106 23:24:06.554155 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\": not found" containerID="74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd" Nov 6 23:24:06.554206 kubelet[2574]: I1106 23:24:06.554173 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd"} err="failed to get container status \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\": not found" Nov 6 23:24:06.554206 kubelet[2574]: I1106 23:24:06.554186 2574 scope.go:117] "RemoveContainer" containerID="4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788" Nov 6 23:24:06.554298 containerd[1479]: time="2025-11-06T23:24:06.554059774Z" level=error msg="ContainerStatus for \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74a166f1fed1403364741fc52881d536805b75932ff3e017536059eafc43a0cd\": not found" Nov 6 23:24:06.554378 containerd[1479]: time="2025-11-06T23:24:06.554322177Z" level=error msg="ContainerStatus for \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\": not found" Nov 6 23:24:06.554439 kubelet[2574]: E1106 23:24:06.554422 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\": not found" containerID="4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788" Nov 6 23:24:06.554469 kubelet[2574]: I1106 23:24:06.554442 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788"} err="failed to get container status \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e70efc24e1a8877f93cd6f4d1f8d8d6012afc925df1c4e725f260b322fbb788\": not found" Nov 6 23:24:06.554510 kubelet[2574]: I1106 23:24:06.554475 2574 scope.go:117] "RemoveContainer" containerID="6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78" Nov 6 23:24:06.554692 containerd[1479]: time="2025-11-06T23:24:06.554625580Z" level=error msg="ContainerStatus for \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\": not found" Nov 6 23:24:06.558966 kubelet[2574]: E1106 23:24:06.554775 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\": not found" containerID="6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78" Nov 6 23:24:06.559035 kubelet[2574]: I1106 23:24:06.558969 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78"} err="failed to get container status \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"6250ffff3e18a19c9c9d264ad17a70f448d45b95b9f21d2d21305e4c40683e78\": not found" Nov 6 23:24:06.569454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2af916feaa617b91a359ad7d6e390dc6dcd3e4c1ec846120f66402223da1db5f-rootfs.mount: Deactivated successfully. Nov 6 23:24:06.569566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4-rootfs.mount: Deactivated successfully. Nov 6 23:24:06.569621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ab70476a6552149aee9cf06db4f0a29d3b05d079140cfc8d037a9c4bfd2cdd4-shm.mount: Deactivated successfully. Nov 6 23:24:06.569671 systemd[1]: var-lib-kubelet-pods-ad0e2e00\x2d67f3\x2d448a\x2da5e7\x2d1534cbe79fff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnm6f.mount: Deactivated successfully. Nov 6 23:24:06.569726 systemd[1]: var-lib-kubelet-pods-ed5292a9\x2de268\x2d454c\x2dbd6a\x2d1912f01cc6bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6htdt.mount: Deactivated successfully. Nov 6 23:24:06.569776 systemd[1]: var-lib-kubelet-pods-ed5292a9\x2de268\x2d454c\x2dbd6a\x2d1912f01cc6bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 23:24:06.569826 systemd[1]: var-lib-kubelet-pods-ed5292a9\x2de268\x2d454c\x2dbd6a\x2d1912f01cc6bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 6 23:24:07.294521 kubelet[2574]: I1106 23:24:07.292416 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0e2e00-67f3-448a-a5e7-1534cbe79fff" path="/var/lib/kubelet/pods/ad0e2e00-67f3-448a-a5e7-1534cbe79fff/volumes" Nov 6 23:24:07.294521 kubelet[2574]: I1106 23:24:07.292790 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5292a9-e268-454c-bd6a-1912f01cc6bc" path="/var/lib/kubelet/pods/ed5292a9-e268-454c-bd6a-1912f01cc6bc/volumes" Nov 6 23:24:07.518525 sshd[4245]: Connection closed by 10.0.0.1 port 39560 Nov 6 23:24:07.518796 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Nov 6 23:24:07.530106 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:39560.service: Deactivated successfully. Nov 6 23:24:07.531984 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:24:07.532175 systemd[1]: session-23.scope: Consumed 1.746s CPU time, 28.7M memory peak. Nov 6 23:24:07.532938 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:24:07.543521 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:39572.service - OpenSSH per-connection server daemon (10.0.0.1:39572). Nov 6 23:24:07.544836 systemd-logind[1467]: Removed session 23. Nov 6 23:24:07.592345 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 39572 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:24:07.593662 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:24:07.598308 systemd-logind[1467]: New session 24 of user core. Nov 6 23:24:07.604400 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 6 23:24:08.329796 kubelet[2574]: E1106 23:24:08.329753 2574 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:24:08.434342 sshd[4405]: Connection closed by 10.0.0.1 port 39572 Nov 6 23:24:08.435393 sshd-session[4402]: pam_unix(sshd:session): session closed for user core Nov 6 23:24:08.448649 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:39572.service: Deactivated successfully. Nov 6 23:24:08.455927 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 23:24:08.457696 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Nov 6 23:24:08.473893 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:39582.service - OpenSSH per-connection server daemon (10.0.0.1:39582). Nov 6 23:24:08.482502 systemd-logind[1467]: Removed session 24. Nov 6 23:24:08.493474 systemd[1]: Created slice kubepods-burstable-podd1fbc783_e8e7_4ef5_9390_aa033a9274b5.slice - libcontainer container kubepods-burstable-podd1fbc783_e8e7_4ef5_9390_aa033a9274b5.slice. Nov 6 23:24:08.521108 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 39582 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:24:08.523089 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:24:08.527865 systemd-logind[1467]: New session 25 of user core. Nov 6 23:24:08.534399 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 6 23:24:08.563199 kubelet[2574]: I1106 23:24:08.563157 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-bpf-maps\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563199 kubelet[2574]: I1106 23:24:08.563198 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-cilium-config-path\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563360 kubelet[2574]: I1106 23:24:08.563222 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-host-proc-sys-kernel\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563360 kubelet[2574]: I1106 23:24:08.563238 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwf5l\" (UniqueName: \"kubernetes.io/projected/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-kube-api-access-jwf5l\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563360 kubelet[2574]: I1106 23:24:08.563309 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-cilium-ipsec-secrets\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563429 kubelet[2574]: I1106 23:24:08.563372 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-hubble-tls\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563429 kubelet[2574]: I1106 23:24:08.563411 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-hostproc\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563473 kubelet[2574]: I1106 23:24:08.563444 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-cilium-cgroup\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563473 kubelet[2574]: I1106 23:24:08.563466 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-clustermesh-secrets\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563524 kubelet[2574]: I1106 23:24:08.563482 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-host-proc-sys-net\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563524 kubelet[2574]: I1106 23:24:08.563509 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-lib-modules\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563569 kubelet[2574]: I1106 23:24:08.563532 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-cilium-run\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563569 kubelet[2574]: I1106 23:24:08.563548 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-cni-path\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563569 kubelet[2574]: I1106 23:24:08.563563 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-etc-cni-netd\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.563625 kubelet[2574]: I1106 23:24:08.563576 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fbc783-e8e7-4ef5-9390-aa033a9274b5-xtables-lock\") pod \"cilium-lj2d9\" (UID: \"d1fbc783-e8e7-4ef5-9390-aa033a9274b5\") " pod="kube-system/cilium-lj2d9"
Nov 6 23:24:08.582925 sshd[4419]: Connection closed by 10.0.0.1 port 39582
Nov 6 23:24:08.584141 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Nov 6 23:24:08.602631 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:39582.service: Deactivated successfully.
Nov 6 23:24:08.604351 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 23:24:08.605006 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit.
Nov 6 23:24:08.613567 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:39598.service - OpenSSH per-connection server daemon (10.0.0.1:39598).
Nov 6 23:24:08.614633 systemd-logind[1467]: Removed session 25.
Nov 6 23:24:08.654349 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 39598 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ
Nov 6 23:24:08.655523 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:24:08.659466 systemd-logind[1467]: New session 26 of user core.
Nov 6 23:24:08.669452 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 6 23:24:08.798663 kubelet[2574]: E1106 23:24:08.798627 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:08.799688 containerd[1479]: time="2025-11-06T23:24:08.799654571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj2d9,Uid:d1fbc783-e8e7-4ef5-9390-aa033a9274b5,Namespace:kube-system,Attempt:0,}"
Nov 6 23:24:08.818166 containerd[1479]: time="2025-11-06T23:24:08.817946022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:24:08.818166 containerd[1479]: time="2025-11-06T23:24:08.818000463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:24:08.818166 containerd[1479]: time="2025-11-06T23:24:08.818012623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:24:08.818166 containerd[1479]: time="2025-11-06T23:24:08.818086064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:24:08.838430 systemd[1]: Started cri-containerd-7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb.scope - libcontainer container 7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb.
Nov 6 23:24:08.860993 containerd[1479]: time="2025-11-06T23:24:08.860882598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj2d9,Uid:d1fbc783-e8e7-4ef5-9390-aa033a9274b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\""
Nov 6 23:24:08.861872 kubelet[2574]: E1106 23:24:08.861792 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:08.867759 containerd[1479]: time="2025-11-06T23:24:08.867612476Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 23:24:08.877587 containerd[1479]: time="2025-11-06T23:24:08.877538990Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d\""
Nov 6 23:24:08.879355 containerd[1479]: time="2025-11-06T23:24:08.878334600Z" level=info msg="StartContainer for \"96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d\""
Nov 6 23:24:08.904430 systemd[1]: Started cri-containerd-96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d.scope - libcontainer container 96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d.
Nov 6 23:24:08.927716 containerd[1479]: time="2025-11-06T23:24:08.927669969Z" level=info msg="StartContainer for \"96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d\" returns successfully"
Nov 6 23:24:08.937448 systemd[1]: cri-containerd-96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d.scope: Deactivated successfully.
Nov 6 23:24:08.963795 containerd[1479]: time="2025-11-06T23:24:08.963739946Z" level=info msg="shim disconnected" id=96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d namespace=k8s.io
Nov 6 23:24:08.963795 containerd[1479]: time="2025-11-06T23:24:08.963790426Z" level=warning msg="cleaning up after shim disconnected" id=96c4bdf6614c85bbb379ae4878b92d72b3c547bf1fdfe8123b08595bf72d1a8d namespace=k8s.io
Nov 6 23:24:08.963795 containerd[1479]: time="2025-11-06T23:24:08.963799986Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:24:09.511982 kubelet[2574]: E1106 23:24:09.511769 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:09.517466 containerd[1479]: time="2025-11-06T23:24:09.517426921Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 23:24:09.533305 containerd[1479]: time="2025-11-06T23:24:09.531508284Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82\""
Nov 6 23:24:09.533305 containerd[1479]: time="2025-11-06T23:24:09.532478175Z" level=info msg="StartContainer for \"5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82\""
Nov 6 23:24:09.559462 systemd[1]: Started cri-containerd-5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82.scope - libcontainer container 5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82.
Nov 6 23:24:09.580047 containerd[1479]: time="2025-11-06T23:24:09.579896121Z" level=info msg="StartContainer for \"5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82\" returns successfully"
Nov 6 23:24:09.585952 systemd[1]: cri-containerd-5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82.scope: Deactivated successfully.
Nov 6 23:24:09.606220 containerd[1479]: time="2025-11-06T23:24:09.606023621Z" level=info msg="shim disconnected" id=5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82 namespace=k8s.io
Nov 6 23:24:09.606220 containerd[1479]: time="2025-11-06T23:24:09.606076262Z" level=warning msg="cleaning up after shim disconnected" id=5115a627f1445d6b8922b0190a220f28803afc73efb1871d7e58e2263bd19b82 namespace=k8s.io
Nov 6 23:24:09.606220 containerd[1479]: time="2025-11-06T23:24:09.606083822Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:24:10.515633 kubelet[2574]: E1106 23:24:10.515592 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:10.528202 containerd[1479]: time="2025-11-06T23:24:10.528142060Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 23:24:10.543977 containerd[1479]: time="2025-11-06T23:24:10.543881521Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d\""
Nov 6 23:24:10.544630 containerd[1479]: time="2025-11-06T23:24:10.544377406Z" level=info msg="StartContainer for \"74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d\""
Nov 6 23:24:10.572426 systemd[1]: Started cri-containerd-74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d.scope - libcontainer container 74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d.
Nov 6 23:24:10.596892 containerd[1479]: time="2025-11-06T23:24:10.596836769Z" level=info msg="StartContainer for \"74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d\" returns successfully"
Nov 6 23:24:10.599710 systemd[1]: cri-containerd-74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d.scope: Deactivated successfully.
Nov 6 23:24:10.621044 containerd[1479]: time="2025-11-06T23:24:10.620986046Z" level=info msg="shim disconnected" id=74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d namespace=k8s.io
Nov 6 23:24:10.621493 containerd[1479]: time="2025-11-06T23:24:10.621316490Z" level=warning msg="cleaning up after shim disconnected" id=74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d namespace=k8s.io
Nov 6 23:24:10.621493 containerd[1479]: time="2025-11-06T23:24:10.621334130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:24:10.671355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74239d7df45832f2287a29da0a89b69cfbef41367b325fe6ff06355b2faa932d-rootfs.mount: Deactivated successfully.
Nov 6 23:24:11.520044 kubelet[2574]: E1106 23:24:11.519980 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:11.526454 containerd[1479]: time="2025-11-06T23:24:11.526413063Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 23:24:11.543702 containerd[1479]: time="2025-11-06T23:24:11.543660260Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836\""
Nov 6 23:24:11.544895 containerd[1479]: time="2025-11-06T23:24:11.544472269Z" level=info msg="StartContainer for \"3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836\""
Nov 6 23:24:11.570658 systemd[1]: Started cri-containerd-3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836.scope - libcontainer container 3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836.
Nov 6 23:24:11.590215 systemd[1]: cri-containerd-3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836.scope: Deactivated successfully.
Nov 6 23:24:11.591996 containerd[1479]: time="2025-11-06T23:24:11.591959973Z" level=info msg="StartContainer for \"3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836\" returns successfully"
Nov 6 23:24:11.620660 containerd[1479]: time="2025-11-06T23:24:11.620593341Z" level=info msg="shim disconnected" id=3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836 namespace=k8s.io
Nov 6 23:24:11.620660 containerd[1479]: time="2025-11-06T23:24:11.620650021Z" level=warning msg="cleaning up after shim disconnected" id=3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836 namespace=k8s.io
Nov 6 23:24:11.620660 containerd[1479]: time="2025-11-06T23:24:11.620658502Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:24:11.631253 containerd[1479]: time="2025-11-06T23:24:11.631187222Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:24:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 6 23:24:11.671332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3465afe907795662120017c8d6ea64e9277c1dbe916c30182d0bbe3c883d8836-rootfs.mount: Deactivated successfully.
Nov 6 23:24:12.528824 kubelet[2574]: E1106 23:24:12.528764 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:12.540531 containerd[1479]: time="2025-11-06T23:24:12.540463094Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 23:24:12.563498 containerd[1479]: time="2025-11-06T23:24:12.563444716Z" level=info msg="CreateContainer within sandbox \"7c6db4623a1ec53b6890bd2319ee81597d37de204d6e11bfb9c98f8073f3c3bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d3474b450cf079f84f2e5ccbb44d7497bf4a0b8ffd378654c0e57a27cf78190a\""
Nov 6 23:24:12.564886 containerd[1479]: time="2025-11-06T23:24:12.563998443Z" level=info msg="StartContainer for \"d3474b450cf079f84f2e5ccbb44d7497bf4a0b8ffd378654c0e57a27cf78190a\""
Nov 6 23:24:12.591424 systemd[1]: Started cri-containerd-d3474b450cf079f84f2e5ccbb44d7497bf4a0b8ffd378654c0e57a27cf78190a.scope - libcontainer container d3474b450cf079f84f2e5ccbb44d7497bf4a0b8ffd378654c0e57a27cf78190a.
Nov 6 23:24:12.614932 containerd[1479]: time="2025-11-06T23:24:12.614807703Z" level=info msg="StartContainer for \"d3474b450cf079f84f2e5ccbb44d7497bf4a0b8ffd378654c0e57a27cf78190a\" returns successfully"
Nov 6 23:24:12.900283 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Nov 6 23:24:13.533448 kubelet[2574]: E1106 23:24:13.533328 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:13.548005 kubelet[2574]: I1106 23:24:13.547848 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lj2d9" podStartSLOduration=5.547832858 podStartE2EDuration="5.547832858s" podCreationTimestamp="2025-11-06 23:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:24:13.54712645 +0000 UTC m=+80.332835982" watchObservedRunningTime="2025-11-06 23:24:13.547832858 +0000 UTC m=+80.333542390"
Nov 6 23:24:14.797734 kubelet[2574]: E1106 23:24:14.797678 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:15.758228 systemd-networkd[1401]: lxc_health: Link UP
Nov 6 23:24:15.763974 systemd-networkd[1401]: lxc_health: Gained carrier
Nov 6 23:24:16.291468 kubelet[2574]: E1106 23:24:16.290442 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:16.798577 kubelet[2574]: E1106 23:24:16.798143 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:17.292653 kubelet[2574]: E1106 23:24:17.292613 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:17.537425 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Nov 6 23:24:17.541083 kubelet[2574]: E1106 23:24:17.541051 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:18.543079 kubelet[2574]: E1106 23:24:18.543038 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:19.294301 kubelet[2574]: E1106 23:24:19.293583 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:21.446989 systemd[1]: run-containerd-runc-k8s.io-d3474b450cf079f84f2e5ccbb44d7497bf4a0b8ffd378654c0e57a27cf78190a-runc.NlI3GA.mount: Deactivated successfully.
Nov 6 23:24:22.290526 kubelet[2574]: E1106 23:24:22.290481 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:24:23.627562 sshd[4433]: Connection closed by 10.0.0.1 port 39598
Nov 6 23:24:23.628581 sshd-session[4425]: pam_unix(sshd:session): session closed for user core
Nov 6 23:24:23.631509 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:39598.service: Deactivated successfully.
Nov 6 23:24:23.634050 systemd[1]: session-26.scope: Deactivated successfully.
Nov 6 23:24:23.635658 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit.
Nov 6 23:24:23.636590 systemd-logind[1467]: Removed session 26.