Nov 23 23:07:55.775668 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 23 23:07:55.775696 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025 Nov 23 23:07:55.775705 kernel: KASLR enabled Nov 23 23:07:55.775711 kernel: efi: EFI v2.7 by EDK II Nov 23 23:07:55.775716 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 23 23:07:55.775722 kernel: random: crng init done Nov 23 23:07:55.775728 kernel: secureboot: Secure boot disabled Nov 23 23:07:55.775734 kernel: ACPI: Early table checksum verification disabled Nov 23 23:07:55.775740 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 23 23:07:55.775747 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 23 23:07:55.775754 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775760 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775766 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775772 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775779 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775787 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775794 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775800 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775806 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:07:55.775812 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 23 23:07:55.775818 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 23:07:55.775825 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 23 23:07:55.775831 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Nov 23 23:07:55.775837 kernel: Zone ranges: Nov 23 23:07:55.775843 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 23 23:07:55.775850 kernel: DMA32 empty Nov 23 23:07:55.775856 kernel: Normal empty Nov 23 23:07:55.775862 kernel: Device empty Nov 23 23:07:55.775868 kernel: Movable zone start for each node Nov 23 23:07:55.775874 kernel: Early memory node ranges Nov 23 23:07:55.775880 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 23 23:07:55.775886 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 23 23:07:55.775892 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 23 23:07:55.775898 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 23 23:07:55.775904 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 23 23:07:55.775910 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 23 23:07:55.775916 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 23 23:07:55.775924 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 23 23:07:55.775930 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 23 23:07:55.775936 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 23 23:07:55.775945 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 23 23:07:55.775952 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 23 23:07:55.775958 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 23 23:07:55.775966 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 23 23:07:55.775972 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 23 23:07:55.775979 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 23 23:07:55.775986 kernel: psci: probing for conduit method from ACPI. Nov 23 23:07:55.775992 kernel: psci: PSCIv1.1 detected in firmware. Nov 23 23:07:55.775999 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 23:07:55.776005 kernel: psci: Trusted OS migration not required Nov 23 23:07:55.776012 kernel: psci: SMC Calling Convention v1.1 Nov 23 23:07:55.776018 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 23 23:07:55.776025 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 23:07:55.776033 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 23:07:55.776040 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 23 23:07:55.776046 kernel: Detected PIPT I-cache on CPU0 Nov 23 23:07:55.776053 kernel: CPU features: detected: GIC system register CPU interface Nov 23 23:07:55.776059 kernel: CPU features: detected: Spectre-v4 Nov 23 23:07:55.776066 kernel: CPU features: detected: Spectre-BHB Nov 23 23:07:55.776072 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 23 23:07:55.776078 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 23 23:07:55.776095 kernel: CPU features: detected: ARM erratum 1418040 Nov 23 23:07:55.776103 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 23 23:07:55.776110 kernel: alternatives: applying boot alternatives Nov 23 23:07:55.776117 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:07:55.776127 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 23:07:55.776134 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 23:07:55.776140 kernel: Fallback order for Node 0: 0 Nov 23 23:07:55.776147 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 23 23:07:55.776153 kernel: Policy zone: DMA Nov 23 23:07:55.776159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 23:07:55.776177 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 23 23:07:55.776185 kernel: software IO TLB: area num 4. Nov 23 23:07:55.776191 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 23 23:07:55.776198 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 23 23:07:55.776205 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 23 23:07:55.776214 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 23:07:55.776222 kernel: rcu: RCU event tracing is enabled. Nov 23 23:07:55.776229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 23 23:07:55.776236 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 23:07:55.776243 kernel: Tracing variant of Tasks RCU enabled. 
Nov 23 23:07:55.776249 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 23:07:55.776256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 23 23:07:55.776263 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 23 23:07:55.776270 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 23 23:07:55.776276 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 23:07:55.776283 kernel: GICv3: 256 SPIs implemented Nov 23 23:07:55.776291 kernel: GICv3: 0 Extended SPIs implemented Nov 23 23:07:55.776299 kernel: Root IRQ handler: gic_handle_irq Nov 23 23:07:55.776305 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 23 23:07:55.776312 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 23:07:55.776318 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 23 23:07:55.776325 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 23 23:07:55.776332 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 23 23:07:55.776339 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 23 23:07:55.776345 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 23 23:07:55.776352 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 23 23:07:55.776359 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 23:07:55.776366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:07:55.776374 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 23 23:07:55.776381 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 23 23:07:55.776388 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 23 23:07:55.776394 kernel: arm-pv: using stolen time PV Nov 23 23:07:55.776401 kernel: Console: colour dummy device 80x25 Nov 23 23:07:55.776408 kernel: ACPI: Core revision 20240827 Nov 23 23:07:55.776415 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 23 23:07:55.776421 kernel: pid_max: default: 32768 minimum: 301 Nov 23 23:07:55.776428 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 23:07:55.776434 kernel: landlock: Up and running. Nov 23 23:07:55.776442 kernel: SELinux: Initializing. Nov 23 23:07:55.776463 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:07:55.776469 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:07:55.776476 kernel: rcu: Hierarchical SRCU implementation. Nov 23 23:07:55.776483 kernel: rcu: Max phase no-delay instances is 400. Nov 23 23:07:55.776490 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 23:07:55.776497 kernel: Remapping and enabling EFI services. Nov 23 23:07:55.776504 kernel: smp: Bringing up secondary CPUs ... 
Nov 23 23:07:55.776511 kernel: Detected PIPT I-cache on CPU1 Nov 23 23:07:55.776524 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 23 23:07:55.776531 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 23 23:07:55.776538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:07:55.776547 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 23 23:07:55.776554 kernel: Detected PIPT I-cache on CPU2 Nov 23 23:07:55.776561 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 23 23:07:55.776568 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 23 23:07:55.776575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:07:55.776584 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 23 23:07:55.776591 kernel: Detected PIPT I-cache on CPU3 Nov 23 23:07:55.776603 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 23 23:07:55.776610 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 23 23:07:55.776618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:07:55.776624 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 23 23:07:55.776631 kernel: smp: Brought up 1 node, 4 CPUs Nov 23 23:07:55.776639 kernel: SMP: Total of 4 processors activated. Nov 23 23:07:55.776646 kernel: CPU: All CPU(s) started at EL1 Nov 23 23:07:55.776654 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 23:07:55.776662 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 23 23:07:55.776669 kernel: CPU features: detected: Common not Private translations Nov 23 23:07:55.776677 kernel: CPU features: detected: CRC32 instructions Nov 23 23:07:55.776684 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 23 23:07:55.776691 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 23 23:07:55.776698 kernel: CPU features: detected: LSE atomic instructions Nov 23 23:07:55.776706 kernel: CPU features: detected: Privileged Access Never Nov 23 23:07:55.776712 kernel: CPU features: detected: RAS Extension Support Nov 23 23:07:55.776721 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 23 23:07:55.776728 kernel: alternatives: applying system-wide alternatives Nov 23 23:07:55.776735 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 23 23:07:55.776743 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved) Nov 23 23:07:55.776750 kernel: devtmpfs: initialized Nov 23 23:07:55.776762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 23:07:55.776770 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 23 23:07:55.776777 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 23 23:07:55.776784 kernel: 0 pages in range for non-PLT usage Nov 23 23:07:55.776792 kernel: 508400 pages in range for PLT usage Nov 23 23:07:55.776800 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 23:07:55.776807 kernel: SMBIOS 3.0.0 present. 
Nov 23 23:07:55.776814 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 23 23:07:55.776821 kernel: DMI: Memory slots populated: 1/1 Nov 23 23:07:55.776829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 23:07:55.776838 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 23:07:55.776845 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 23:07:55.776853 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 23:07:55.776861 kernel: audit: initializing netlink subsys (disabled) Nov 23 23:07:55.776868 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 Nov 23 23:07:55.776876 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 23:07:55.776883 kernel: cpuidle: using governor menu Nov 23 23:07:55.776890 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 23 23:07:55.776897 kernel: ASID allocator initialised with 32768 entries Nov 23 23:07:55.776904 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 23:07:55.776911 kernel: Serial: AMBA PL011 UART driver Nov 23 23:07:55.776918 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 23:07:55.776928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 23:07:55.776935 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 23:07:55.776942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 23:07:55.776949 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 23:07:55.776957 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 23:07:55.776964 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 23:07:55.776971 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 23:07:55.776978 kernel: ACPI: Added _OSI(Module Device) Nov 23 23:07:55.776985 kernel: ACPI: Added _OSI(Processor Device) Nov 23 23:07:55.776994 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 23:07:55.777001 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 23:07:55.777008 kernel: ACPI: Interpreter enabled Nov 23 23:07:55.777015 kernel: ACPI: Using GIC for interrupt routing Nov 23 23:07:55.777022 kernel: ACPI: MCFG table detected, 1 entries Nov 23 23:07:55.777029 kernel: ACPI: CPU0 has been hot-added Nov 23 23:07:55.777036 kernel: ACPI: CPU1 has been hot-added Nov 23 23:07:55.777043 kernel: ACPI: CPU2 has been hot-added Nov 23 23:07:55.777050 kernel: ACPI: CPU3 has been hot-added Nov 23 23:07:55.777058 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 23 23:07:55.777066 kernel: printk: legacy console [ttyAMA0] enabled Nov 23 23:07:55.777074 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 23 23:07:55.777335 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 23:07:55.777422 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 23:07:55.777487 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 23:07:55.777550 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 23 23:07:55.777612 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 23 23:07:55.777626 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 23 23:07:55.777634 
kernel: PCI host bridge to bus 0000:00 Nov 23 23:07:55.777706 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 23 23:07:55.777765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 23:07:55.777822 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 23 23:07:55.777876 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 23 23:07:55.777957 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 23 23:07:55.778033 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 23 23:07:55.778112 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 23 23:07:55.778198 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 23 23:07:55.778266 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 23 23:07:55.778346 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 23 23:07:55.778408 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 23 23:07:55.778474 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 23 23:07:55.778534 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 23 23:07:55.778588 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 23 23:07:55.778644 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 23 23:07:55.778655 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 23:07:55.778663 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 23:07:55.778671 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 23:07:55.778678 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 23:07:55.778688 kernel: iommu: Default domain type: Translated Nov 23 23:07:55.778695 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 23:07:55.778702 kernel: efivars: Registered efivars operations Nov 23 23:07:55.778710 kernel: vgaarb: loaded Nov 23 23:07:55.778718 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 23:07:55.778725 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 23:07:55.778733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 23:07:55.778741 kernel: pnp: PnP ACPI init Nov 23 23:07:55.778818 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 23 23:07:55.778831 kernel: pnp: PnP ACPI: found 1 devices Nov 23 23:07:55.778838 kernel: NET: Registered PF_INET protocol family Nov 23 23:07:55.778846 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 23:07:55.778853 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 23:07:55.778861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 23:07:55.778869 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 23:07:55.778876 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 23:07:55.778883 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 23:07:55.778892 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:07:55.778900 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:07:55.778907 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 23:07:55.778927 kernel: PCI: CLS 0 bytes, default 64 Nov 23 23:07:55.778935 
kernel: kvm [1]: HYP mode not available Nov 23 23:07:55.778943 kernel: Initialise system trusted keyrings Nov 23 23:07:55.778951 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 23:07:55.778959 kernel: Key type asymmetric registered Nov 23 23:07:55.778967 kernel: Asymmetric key parser 'x509' registered Nov 23 23:07:55.778976 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 23:07:55.778984 kernel: io scheduler mq-deadline registered Nov 23 23:07:55.778992 kernel: io scheduler kyber registered Nov 23 23:07:55.779000 kernel: io scheduler bfq registered Nov 23 23:07:55.779008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 23:07:55.779015 kernel: ACPI: button: Power Button [PWRB] Nov 23 23:07:55.779023 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 23:07:55.779097 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 23 23:07:55.779109 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 23:07:55.779119 kernel: thunder_xcv, ver 1.0 Nov 23 23:07:55.779127 kernel: thunder_bgx, ver 1.0 Nov 23 23:07:55.779135 kernel: nicpf, ver 1.0 Nov 23 23:07:55.779142 kernel: nicvf, ver 1.0 Nov 23 23:07:55.779245 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 23:07:55.779309 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:07:55 UTC (1763939275) Nov 23 23:07:55.779319 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 23:07:55.779327 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 23 23:07:55.779337 kernel: watchdog: NMI not fully supported Nov 23 23:07:55.779344 kernel: watchdog: Hard watchdog permanently disabled Nov 23 23:07:55.779351 kernel: NET: Registered PF_INET6 protocol family Nov 23 23:07:55.779359 kernel: Segment Routing with IPv6 Nov 23 23:07:55.779366 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 23:07:55.779373 kernel: NET: Registered PF_PACKET protocol family Nov 23 23:07:55.779380 kernel: Key type dns_resolver registered Nov 23 23:07:55.779388 kernel: registered taskstats version 1 Nov 23 23:07:55.779395 kernel: Loading compiled-in X.509 certificates Nov 23 23:07:55.779403 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339' Nov 23 23:07:55.779411 kernel: Demotion targets for Node 0: null Nov 23 23:07:55.779418 kernel: Key type .fscrypt registered Nov 23 23:07:55.779425 kernel: Key type fscrypt-provisioning registered Nov 23 23:07:55.779432 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 23:07:55.779439 kernel: ima: Allocated hash algorithm: sha1 Nov 23 23:07:55.779447 kernel: ima: No architecture policies found Nov 23 23:07:55.779454 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 23:07:55.779462 kernel: clk: Disabling unused clocks Nov 23 23:07:55.779469 kernel: PM: genpd: Disabling unused power domains Nov 23 23:07:55.779478 kernel: Warning: unable to open an initial console. Nov 23 23:07:55.779486 kernel: Freeing unused kernel memory: 39552K Nov 23 23:07:55.779493 kernel: Run /init as init process Nov 23 23:07:55.779501 kernel: with arguments: Nov 23 23:07:55.779509 kernel: /init Nov 23 23:07:55.779516 kernel: with environment: Nov 23 23:07:55.779523 kernel: HOME=/ Nov 23 23:07:55.779531 kernel: TERM=linux Nov 23 23:07:55.779540 systemd[1]: Successfully made /usr/ read-only. 
Nov 23 23:07:55.779552 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:07:55.779561 systemd[1]: Detected virtualization kvm. Nov 23 23:07:55.779569 systemd[1]: Detected architecture arm64. Nov 23 23:07:55.779576 systemd[1]: Running in initrd. Nov 23 23:07:55.779584 systemd[1]: No hostname configured, using default hostname. Nov 23 23:07:55.779593 systemd[1]: Hostname set to . Nov 23 23:07:55.779601 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:07:55.779611 systemd[1]: Queued start job for default target initrd.target. Nov 23 23:07:55.779619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:07:55.779627 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:07:55.779635 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 23:07:55.779643 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:07:55.779651 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 23:07:55.779659 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 23:07:55.779669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 23:07:55.779677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 23:07:55.779685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:07:55.779693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:07:55.779701 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:07:55.779709 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:07:55.779716 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:07:55.779724 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:07:55.779734 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:07:55.779741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:07:55.779749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 23:07:55.779757 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 23:07:55.779764 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:07:55.779773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:07:55.779780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:07:55.779788 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:07:55.779797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 23:07:55.779805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:07:55.779813 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 23 23:07:55.779821 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 23:07:55.779828 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 23:07:55.779836 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:07:55.779844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:07:55.779852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:07:55.779859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 23:07:55.779869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:07:55.779877 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 23:07:55.779885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:07:55.779912 systemd-journald[245]: Collecting audit messages is disabled. Nov 23 23:07:55.779935 systemd-journald[245]: Journal started Nov 23 23:07:55.779953 systemd-journald[245]: Runtime Journal (/run/log/journal/6455769e8f4b43de88ded177cb491bf8) is 6M, max 48.5M, 42.4M free. Nov 23 23:07:55.788321 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 23:07:55.788378 kernel: Bridge firewalling registered Nov 23 23:07:55.788389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:07:55.772904 systemd-modules-load[246]: Inserted module 'overlay' Nov 23 23:07:55.787703 systemd-modules-load[246]: Inserted module 'br_netfilter' Nov 23 23:07:55.794728 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:07:55.797143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:07:55.798502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:07:55.803348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 23:07:55.805324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:07:55.807560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:07:55.822981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:07:55.834800 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 23:07:55.834986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:07:55.838112 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:07:55.841116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:07:55.844025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:07:55.848054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 23:07:55.850999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 23 23:07:55.874068 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:07:55.890092 systemd-resolved[291]: Positive Trust Anchors: Nov 23 23:07:55.890113 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:07:55.890146 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:07:55.896008 systemd-resolved[291]: Defaulting to hostname 'linux'. Nov 23 23:07:55.897251 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:07:55.900712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:07:55.964204 kernel: SCSI subsystem initialized Nov 23 23:07:55.969189 kernel: Loading iSCSI transport class v2.0-870. Nov 23 23:07:55.978308 kernel: iscsi: registered transport (tcp) Nov 23 23:07:55.991292 kernel: iscsi: registered transport (qla4xxx) Nov 23 23:07:55.991350 kernel: QLogic iSCSI HBA Driver Nov 23 23:07:56.010645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:07:56.025575 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:07:56.027869 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:07:56.078864 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 23 23:07:56.083334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 23 23:07:56.151205 kernel: raid6: neonx8 gen() 15610 MB/s Nov 23 23:07:56.168199 kernel: raid6: neonx4 gen() 15275 MB/s Nov 23 23:07:56.185199 kernel: raid6: neonx2 gen() 13174 MB/s Nov 23 23:07:56.202196 kernel: raid6: neonx1 gen() 10289 MB/s Nov 23 23:07:56.219182 kernel: raid6: int64x8 gen() 6890 MB/s Nov 23 23:07:56.236186 kernel: raid6: int64x4 gen() 7337 MB/s Nov 23 23:07:56.253196 kernel: raid6: int64x2 gen() 6068 MB/s Nov 23 23:07:56.270335 kernel: raid6: int64x1 gen() 5018 MB/s Nov 23 23:07:56.270372 kernel: raid6: using algorithm neonx8 gen() 15610 MB/s Nov 23 23:07:56.288231 kernel: raid6: .... xor() 12037 MB/s, rmw enabled Nov 23 23:07:56.288264 kernel: raid6: using neon recovery algorithm Nov 23 23:07:56.294242 kernel: xor: measuring software checksum speed Nov 23 23:07:56.294295 kernel: 8regs : 21533 MB/sec Nov 23 23:07:56.295488 kernel: 32regs : 21607 MB/sec Nov 23 23:07:56.295512 kernel: arm64_neon : 27927 MB/sec Nov 23 23:07:56.295521 kernel: xor: using function: arm64_neon (27927 MB/sec) Nov 23 23:07:56.351220 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 23:07:56.358753 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 23 23:07:56.362617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:07:56.394996 systemd-udevd[500]: Using default interface naming scheme 'v255'. Nov 23 23:07:56.399300 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:07:56.403445 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 23:07:56.444534 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Nov 23 23:07:56.471627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:07:56.474250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:07:56.550222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:07:56.554970 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 23:07:56.615468 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 23 23:07:56.615661 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 23 23:07:56.621668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:07:56.621802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:07:56.624148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:07:56.626149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:07:56.633190 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 23:07:56.633270 kernel: GPT:9289727 != 19775487 Nov 23 23:07:56.633285 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 23:07:56.633294 kernel: GPT:9289727 != 19775487 Nov 23 23:07:56.633303 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 23:07:56.633321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:07:56.671068 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 23 23:07:56.672745 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 23:07:56.674627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:07:56.688228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 23 23:07:56.694505 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 23 23:07:56.695716 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 23 23:07:56.704573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 23 23:07:56.705866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:07:56.707756 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:07:56.709683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:07:56.712401 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 23:07:56.714322 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 23:07:56.739804 disk-uuid[593]: Primary Header is updated. Nov 23 23:07:56.739804 disk-uuid[593]: Secondary Entries is updated. Nov 23 23:07:56.739804 disk-uuid[593]: Secondary Header is updated. 
Nov 23 23:07:56.744182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:07:56.745189 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:07:57.755203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:07:57.755703 disk-uuid[596]: The operation has completed successfully. Nov 23 23:07:57.780218 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 23:07:57.780331 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 23:07:57.825575 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 23:07:57.841592 sh[613]: Success Nov 23 23:07:57.855206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 23:07:57.855274 kernel: device-mapper: uevent: version 1.0.3 Nov 23 23:07:57.857223 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 23:07:57.867249 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 23:07:57.896060 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 23:07:57.899696 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 23:07:57.911640 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 23:07:57.919215 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (625) Nov 23 23:07:57.919268 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 Nov 23 23:07:57.921033 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:07:57.925570 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 23:07:57.925630 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 23:07:57.926914 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 23:07:57.928390 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:07:57.929826 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 23:07:57.930744 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 23:07:57.933973 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 23:07:57.959225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656) Nov 23 23:07:57.961370 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:07:57.961436 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:07:57.964487 kernel: BTRFS info (device vda6): turning on async discard Nov 23 23:07:57.964553 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 23:07:57.970250 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:07:57.970674 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 23:07:57.974553 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 23:07:58.054818 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:07:58.059410 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 23 23:07:58.084022 ignition[701]: Ignition 2.22.0 Nov 23 23:07:58.084039 ignition[701]: Stage: fetch-offline Nov 23 23:07:58.084090 ignition[701]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:07:58.084100 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:07:58.084216 ignition[701]: parsed url from cmdline: "" Nov 23 23:07:58.084219 ignition[701]: no config URL provided Nov 23 23:07:58.084224 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:07:58.084231 ignition[701]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:07:58.084259 ignition[701]: op(1): [started] loading QEMU firmware config module Nov 23 23:07:58.084263 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 23 23:07:58.091542 ignition[701]: op(1): [finished] loading QEMU firmware config module Nov 23 23:07:58.103616 systemd-networkd[808]: lo: Link UP Nov 23 23:07:58.103629 systemd-networkd[808]: lo: Gained carrier Nov 23 23:07:58.104440 systemd-networkd[808]: Enumeration completed Nov 23 23:07:58.104945 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:07:58.104949 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:07:58.106035 systemd-networkd[808]: eth0: Link UP Nov 23 23:07:58.106103 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:07:58.106154 systemd-networkd[808]: eth0: Gained carrier Nov 23 23:07:58.106271 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:07:58.107411 systemd[1]: Reached target network.target - Network. Nov 23 23:07:58.132274 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 23 23:07:58.145343 ignition[701]: parsing config with SHA512: c5ee354bb230245b210aa3c0fb8735de3892f5a08782cf89cfed237fabaca0caa59a7c80af4e0ec18985556012ed660f9bac31bcf1505fe7eef61b4d0079de6e Nov 23 23:07:58.151205 unknown[701]: fetched base config from "system" Nov 23 23:07:58.151216 unknown[701]: fetched user config from "qemu" Nov 23 23:07:58.151641 ignition[701]: fetch-offline: fetch-offline passed Nov 23 23:07:58.154837 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:07:58.151701 ignition[701]: Ignition finished successfully Nov 23 23:07:58.157151 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 23 23:07:58.158135 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 23:07:58.199227 ignition[816]: Ignition 2.22.0 Nov 23 23:07:58.199241 ignition[816]: Stage: kargs Nov 23 23:07:58.199383 ignition[816]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:07:58.199392 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:07:58.200298 ignition[816]: kargs: kargs passed Nov 23 23:07:58.200459 ignition[816]: Ignition finished successfully Nov 23 23:07:58.203819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 23:07:58.206599 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 23 23:07:58.238084 ignition[825]: Ignition 2.22.0 Nov 23 23:07:58.238102 ignition[825]: Stage: disks Nov 23 23:07:58.238293 ignition[825]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:07:58.238303 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:07:58.239152 ignition[825]: disks: disks passed Nov 23 23:07:58.242041 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 23:07:58.239223 ignition[825]: Ignition finished successfully Nov 23 23:07:58.243362 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 23:07:58.245185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 23:07:58.246805 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:07:58.248593 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:07:58.250092 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:07:58.252881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 23:07:58.285553 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 23 23:07:58.293740 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 23:07:58.296220 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 23:07:58.364200 kernel: EXT4-fs (vda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none. Nov 23 23:07:58.364831 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 23:07:58.366318 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 23:07:58.369711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:07:58.372418 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 23:07:58.373495 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 23 23:07:58.373542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 23:07:58.373581 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:07:58.387042 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 23:07:58.390285 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (844) Nov 23 23:07:58.390311 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:07:58.390321 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:07:58.390330 kernel: BTRFS info (device vda6): turning on async discard Nov 23 23:07:58.392270 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 23:07:58.394110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 23:07:58.396579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 23:07:58.446475 initrd-setup-root[868]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 23:07:58.450184 initrd-setup-root[875]: cut: /sysroot/etc/group: No such file or directory Nov 23 23:07:58.453558 initrd-setup-root[882]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 23:07:58.456548 initrd-setup-root[889]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 23:07:58.536930 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Nov 23 23:07:58.539275 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 23:07:58.541213 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 23:07:58.569194 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:07:58.582692 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 23:07:58.598934 ignition[957]: INFO : Ignition 2.22.0 Nov 23 23:07:58.598934 ignition[957]: INFO : Stage: mount Nov 23 23:07:58.600506 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:07:58.600506 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:07:58.600506 ignition[957]: INFO : mount: mount passed Nov 23 23:07:58.600506 ignition[957]: INFO : Ignition finished successfully Nov 23 23:07:58.604227 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 23:07:58.606502 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 23:07:58.918638 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 23:07:58.920159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:07:58.951882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (970) Nov 23 23:07:58.951933 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:07:58.951944 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:07:58.957055 kernel: BTRFS info (device vda6): turning on async discard Nov 23 23:07:58.957091 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 23:07:58.958602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 23:07:59.011628 ignition[988]: INFO : Ignition 2.22.0 Nov 23 23:07:59.011628 ignition[988]: INFO : Stage: files Nov 23 23:07:59.014486 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:07:59.014486 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:07:59.019105 ignition[988]: DEBUG : files: compiled without relabeling support, skipping Nov 23 23:07:59.020115 ignition[988]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 23:07:59.020115 ignition[988]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 23:07:59.024374 ignition[988]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 23:07:59.026079 ignition[988]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 23:07:59.026079 ignition[988]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 23:07:59.024997 unknown[988]: wrote ssh authorized keys file for user: core Nov 23 23:07:59.030623 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 23:07:59.030623 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 23 23:07:59.066758 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 23:07:59.233789 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 23:07:59.233789 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Nov 23 23:07:59.237808 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 23 23:07:59.421100 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 23 23:07:59.489130 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 23 23:07:59.489130 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:07:59.493021 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:07:59.510386 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:07:59.510386 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:07:59.510386 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 23 23:07:59.758945 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 23 23:07:59.968293 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:07:59.968293 ignition[988]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 23 23:07:59.971623 ignition[988]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:08:00.024706 ignition[988]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:08:00.024706 ignition[988]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 23 23:08:00.024706 ignition[988]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 23 23:08:00.024706 ignition[988]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 23 23:08:00.031553 ignition[988]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 23 23:08:00.031553 ignition[988]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 23 23:08:00.031553 ignition[988]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 23 23:08:00.045657 ignition[988]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 23 23:08:00.050194 ignition[988]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 23 23:08:00.052836 ignition[988]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 23 23:08:00.052836 ignition[988]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 23 23:08:00.052836 ignition[988]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 23:08:00.052836 ignition[988]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:08:00.052836 ignition[988]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:08:00.052836 ignition[988]: INFO : files: files passed Nov 23 23:08:00.052836 ignition[988]: INFO : Ignition finished successfully Nov 23 23:08:00.053698 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 23:08:00.056266 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 23:08:00.058097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 23:08:00.073701 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 23:08:00.073826 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 23:08:00.076785 initrd-setup-root-after-ignition[1016]: grep: /sysroot/oem/oem-release: No such file or directory Nov 23 23:08:00.078423 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:08:00.078423 initrd-setup-root-after-ignition[1019]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:08:00.081290 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:08:00.081215 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:08:00.082695 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 23:08:00.085548 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 23:08:00.106291 systemd-networkd[808]: eth0: Gained IPv6LL Nov 23 23:08:00.131477 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Nov 23 23:08:00.131633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 23:08:00.133725 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 23:08:00.135303 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 23:08:00.137079 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 23:08:00.138046 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 23:08:00.165463 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:08:00.169350 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 23:08:00.190815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:08:00.192054 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:08:00.194136 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 23:08:00.196117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 23:08:00.196364 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:08:00.198502 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 23:08:00.200486 systemd[1]: Stopped target basic.target - Basic System. Nov 23 23:08:00.202136 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 23:08:00.203834 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:08:00.205811 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 23:08:00.207589 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:08:00.209432 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 23:08:00.211243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:08:00.213259 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 23:08:00.215156 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 23:08:00.216990 systemd[1]: Stopped target swap.target - Swaps. Nov 23 23:08:00.218606 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 23:08:00.218759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:08:00.221155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:08:00.224384 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:08:00.225702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 23:08:00.229223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:08:00.230512 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 23:08:00.230748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 23:08:00.233731 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 23:08:00.233864 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:08:00.235921 systemd[1]: Stopped target paths.target - Path Units. Nov 23 23:08:00.237524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 23:08:00.238255 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 23 23:08:00.239564 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 23:08:00.241027 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 23:08:00.242820 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 23:08:00.242905 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:08:00.244873 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 23:08:00.244952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:08:00.246364 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 23:08:00.246487 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:08:00.248101 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 23:08:00.248224 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 23:08:00.250463 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 23:08:00.251796 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 23:08:00.251944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:08:00.254641 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 23:08:00.256464 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 23:08:00.256608 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:08:00.258356 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 23:08:00.258474 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:08:00.264070 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 23:08:00.270367 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 23:08:00.282660 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 23:08:00.287978 ignition[1043]: INFO : Ignition 2.22.0 Nov 23 23:08:00.287978 ignition[1043]: INFO : Stage: umount Nov 23 23:08:00.289740 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:08:00.289740 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:08:00.289740 ignition[1043]: INFO : umount: umount passed Nov 23 23:08:00.289740 ignition[1043]: INFO : Ignition finished successfully Nov 23 23:08:00.291867 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 23:08:00.291982 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 23:08:00.293492 systemd[1]: Stopped target network.target - Network. Nov 23 23:08:00.296214 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 23:08:00.296349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 23:08:00.297814 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 23:08:00.297881 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 23:08:00.299550 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 23:08:00.299604 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 23:08:00.302073 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 23:08:00.302122 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 23:08:00.303569 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 23:08:00.305144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Nov 23 23:08:00.313006 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 23:08:00.313141 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 23:08:00.316563 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 23:08:00.316813 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 23:08:00.316912 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 23:08:00.320089 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 23:08:00.320778 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 23:08:00.322403 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 23:08:00.322449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:08:00.325363 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 23:08:00.326932 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 23:08:00.326996 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:08:00.329298 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 23:08:00.329349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:08:00.331905 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 23:08:00.331951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 23:08:00.333972 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 23:08:00.334022 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:08:00.336748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:08:00.341736 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 23:08:00.341806 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:08:00.355147 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 23:08:00.355313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:08:00.357403 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 23:08:00.357498 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 23:08:00.359249 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 23:08:00.359328 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 23:08:00.360878 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 23:08:00.360916 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:08:00.362053 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 23:08:00.362122 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:08:00.364640 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 23:08:00.364706 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 23:08:00.367375 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 23:08:00.367438 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:08:00.371033 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Nov 23 23:08:00.372173 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 23:08:00.372245 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:08:00.375054 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 23:08:00.375114 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:08:00.377821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:08:00.377878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:08:00.382048 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 23:08:00.382117 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 23:08:00.382150 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:08:00.382471 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 23:08:00.382569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 23:08:00.384828 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 23:08:00.385003 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 23:08:00.387934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 23:08:00.388051 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 23:08:00.389646 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 23:08:00.392141 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 23:08:00.412720 systemd[1]: Switching root. Nov 23 23:08:00.453633 systemd-journald[245]: Journal stopped Nov 23 23:08:01.323550 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Nov 23 23:08:01.323622 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 23:08:01.323634 kernel: SELinux: policy capability open_perms=1 Nov 23 23:08:01.323644 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 23:08:01.323658 kernel: SELinux: policy capability always_check_network=0 Nov 23 23:08:01.323668 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 23:08:01.323678 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 23:08:01.323687 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 23:08:01.323701 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 23:08:01.323715 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 23:08:01.323725 kernel: audit: type=1403 audit(1763939280.678:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 23:08:01.323740 systemd[1]: Successfully loaded SELinux policy in 74.562ms. Nov 23 23:08:01.323763 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.813ms. Nov 23 23:08:01.323775 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:08:01.323786 systemd[1]: Detected virtualization kvm. Nov 23 23:08:01.323799 systemd[1]: Detected architecture arm64. Nov 23 23:08:01.323809 systemd[1]: Detected first boot. 
Nov 23 23:08:01.323821 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:08:01.323832 zram_generator::config[1088]: No configuration found. Nov 23 23:08:01.323843 kernel: NET: Registered PF_VSOCK protocol family Nov 23 23:08:01.323853 systemd[1]: Populated /etc with preset unit settings. Nov 23 23:08:01.323865 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 23:08:01.323876 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 23:08:01.323887 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 23:08:01.323897 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 23:08:01.323910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 23:08:01.323921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 23:08:01.323931 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 23:08:01.323941 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 23:08:01.323952 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 23:08:01.323962 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 23:08:01.323972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 23:08:01.323983 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 23:08:01.323993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:08:01.324005 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:08:01.324015 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 23:08:01.324027 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 23:08:01.324037 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 23:08:01.324048 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:08:01.324070 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 23 23:08:01.324083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:08:01.324095 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:08:01.324114 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 23:08:01.324125 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 23:08:01.324137 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 23:08:01.324147 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 23:08:01.324159 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:08:01.324253 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:08:01.324268 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:08:01.324278 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:08:01.324290 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 23:08:01.324302 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Nov 23 23:08:01.324313 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 23:08:01.324328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:08:01.324338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:08:01.324348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:08:01.324359 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 23:08:01.324372 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 23:08:01.324382 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 23:08:01.324393 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 23:08:01.324405 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 23:08:01.324415 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 23:08:01.324425 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 23:08:01.324435 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 23:08:01.324469 systemd[1]: Reached target machines.target - Containers. Nov 23 23:08:01.324481 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 23:08:01.324492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:08:01.324501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:08:01.324514 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 23:08:01.324524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:08:01.324534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:08:01.324544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:08:01.324553 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 23:08:01.324567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:08:01.324577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 23:08:01.324587 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 23:08:01.324597 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 23:08:01.324608 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 23:08:01.324620 kernel: fuse: init (API version 7.41) Nov 23 23:08:01.324631 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 23:08:01.324641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:08:01.324652 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:08:01.324662 kernel: ACPI: bus type drm_connector registered Nov 23 23:08:01.324671 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 23 23:08:01.324681 kernel: loop: module loaded Nov 23 23:08:01.324690 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:08:01.324702 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 23:08:01.324712 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 23:08:01.324752 systemd-journald[1156]: Collecting audit messages is disabled. Nov 23 23:08:01.324777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:08:01.324789 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 23:08:01.324799 systemd[1]: Stopped verity-setup.service. Nov 23 23:08:01.324810 systemd-journald[1156]: Journal started Nov 23 23:08:01.324832 systemd-journald[1156]: Runtime Journal (/run/log/journal/6455769e8f4b43de88ded177cb491bf8) is 6M, max 48.5M, 42.4M free. Nov 23 23:08:01.098597 systemd[1]: Queued start job for default target multi-user.target. Nov 23 23:08:01.117344 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 23 23:08:01.117764 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 23:08:01.332059 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:08:01.330848 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 23:08:01.332399 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 23:08:01.333711 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 23:08:01.334957 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 23:08:01.336493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 23:08:01.337918 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 23:08:01.339316 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 23:08:01.343197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:08:01.344667 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 23:08:01.344859 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 23:08:01.346456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:08:01.346704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:08:01.348240 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:08:01.348417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:08:01.349728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:08:01.349903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:08:01.351416 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 23:08:01.351598 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 23:08:01.353145 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:08:01.353342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:08:01.354662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:08:01.356036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:08:01.357690 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Nov 23 23:08:01.359205 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 23:08:01.371834 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:08:01.374256 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 23:08:01.376655 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 23:08:01.377870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 23:08:01.377902 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:08:01.379783 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 23:08:01.392125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 23:08:01.393283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:08:01.394857 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 23:08:01.396935 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 23:08:01.398254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:08:01.399492 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 23:08:01.400778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:08:01.403452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:08:01.407425 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 23:08:01.409913 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 23:08:01.414215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:08:01.416635 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 23:08:01.419616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 23:08:01.421350 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 23:08:01.423240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:08:01.424567 systemd-journald[1156]: Time spent on flushing to /var/log/journal/6455769e8f4b43de88ded177cb491bf8 is 11.846ms for 895 entries. Nov 23 23:08:01.424567 systemd-journald[1156]: System Journal (/var/log/journal/6455769e8f4b43de88ded177cb491bf8) is 8M, max 195.6M, 187.6M free. Nov 23 23:08:01.439887 systemd-journald[1156]: Received client request to flush runtime journal. Nov 23 23:08:01.439922 kernel: loop0: detected capacity change from 0 to 119840 Nov 23 23:08:01.429178 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 23:08:01.434340 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 23:08:01.446663 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 23:08:01.454202 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 23:08:01.458224 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Nov 23 23:08:01.461507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:08:01.476202 kernel: loop1: detected capacity change from 0 to 100632 Nov 23 23:08:01.484591 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 23:08:01.493919 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Nov 23 23:08:01.493939 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Nov 23 23:08:01.499222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:08:01.510211 kernel: loop2: detected capacity change from 0 to 200800 Nov 23 23:08:01.547200 kernel: loop3: detected capacity change from 0 to 119840 Nov 23 23:08:01.559194 kernel: loop4: detected capacity change from 0 to 100632 Nov 23 23:08:01.570191 kernel: loop5: detected capacity change from 0 to 200800 Nov 23 23:08:01.577601 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 23 23:08:01.578051 (sd-merge)[1226]: Merged extensions into '/usr'. Nov 23 23:08:01.582817 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 23:08:01.582975 systemd[1]: Reloading... Nov 23 23:08:01.649238 zram_generator::config[1255]: No configuration found. Nov 23 23:08:01.769014 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:08:01.795869 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 23:08:01.796503 systemd[1]: Reloading finished in 213 ms. Nov 23 23:08:01.829099 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:08:01.832674 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 23:08:01.858693 systemd[1]: Starting ensure-sysext.service... Nov 23 23:08:01.860712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:08:01.871822 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)... Nov 23 23:08:01.871845 systemd[1]: Reloading... Nov 23 23:08:01.886934 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 23:08:01.886976 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 23:08:01.887255 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 23:08:01.887549 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 23:08:01.888324 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 23:08:01.888545 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Nov 23 23:08:01.888597 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Nov 23 23:08:01.892221 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:08:01.892235 systemd-tmpfiles[1288]: Skipping /boot Nov 23 23:08:01.898687 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:08:01.898703 systemd-tmpfiles[1288]: Skipping /boot Nov 23 23:08:01.919232 zram_generator::config[1313]: No configuration found. Nov 23 23:08:02.060209 systemd[1]: Reloading finished in 188 ms. 
Nov 23 23:08:02.082026 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 23:08:02.089144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:08:02.103656 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:08:02.106211 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 23:08:02.108345 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 23:08:02.113081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:08:02.116271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:08:02.119260 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 23:08:02.129814 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 23:08:02.135127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:08:02.137431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:08:02.140249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:08:02.143775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:08:02.145127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:08:02.145317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:08:02.146976 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 23:08:02.151081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:08:02.151279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:08:02.153313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:08:02.153494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:08:02.155563 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:08:02.155720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:08:02.161599 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:08:02.168335 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:08:02.169796 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Nov 23 23:08:02.171718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:08:02.173126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:08:02.175831 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:08:02.178481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:08:02.179605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 23 23:08:02.179733 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:08:02.187774 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 23:08:02.189158 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:08:02.190214 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 23:08:02.194787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:08:02.194994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:08:02.196424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:08:02.198751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:08:02.198926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:08:02.201229 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:08:02.201411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:08:02.208750 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:08:02.221271 systemd[1]: Finished ensure-sysext.service. Nov 23 23:08:02.228642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:08:02.230655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:08:02.234322 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:08:02.235363 augenrules[1427]: No rules Nov 23 23:08:02.236223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:08:02.239231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:08:02.240305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:08:02.240360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:08:02.248912 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:08:02.257523 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 23 23:08:02.258687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:08:02.259314 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:08:02.259552 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:08:02.261627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:08:02.261788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:08:02.264212 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:08:02.264387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Nov 23 23:08:02.265722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:08:02.265907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:08:02.267340 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:08:02.267497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:08:02.272390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:08:02.272452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:08:02.273067 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 23 23:08:02.274757 systemd-resolved[1354]: Positive Trust Anchors: Nov 23 23:08:02.274777 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:08:02.274811 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:08:02.288321 systemd-resolved[1354]: Defaulting to hostname 'linux'. Nov 23 23:08:02.302748 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:08:02.305401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:08:02.345346 systemd-networkd[1438]: lo: Link UP Nov 23 23:08:02.345354 systemd-networkd[1438]: lo: Gained carrier Nov 23 23:08:02.346416 systemd-networkd[1438]: Enumeration completed Nov 23 23:08:02.346538 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:08:02.347200 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:08:02.347204 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:08:02.348293 systemd-networkd[1438]: eth0: Link UP Nov 23 23:08:02.348453 systemd-networkd[1438]: eth0: Gained carrier Nov 23 23:08:02.348471 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:08:02.349142 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 23 23:08:02.352139 systemd[1]: Reached target network.target - Network. Nov 23 23:08:02.353210 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:08:02.354335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:08:02.355655 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 23:08:02.357205 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 23 23:08:02.358397 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:08:02.358430 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:08:02.359417 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:08:02.360738 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:08:02.362213 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:08:02.363250 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 23 23:08:02.363510 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:08:02.363862 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Nov 23 23:08:02.365047 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 23 23:08:02.365103 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:08:02.365107 systemd-timesyncd[1439]: Initial clock synchronization to Sun 2025-11-23 23:08:02.391636 UTC. Nov 23 23:08:02.367738 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:08:02.370949 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:08:02.372385 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:08:02.374345 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 23:08:02.377808 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:08:02.379537 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:08:02.381973 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 23:08:02.385841 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 23:08:02.387668 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:08:02.396161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 23 23:08:02.398296 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:08:02.399264 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:08:02.400281 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:08:02.400313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:08:02.401742 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:08:02.405699 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 23:08:02.410487 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 23:08:02.418345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:08:02.422383 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:08:02.423358 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:08:02.425381 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 23 23:08:02.432186 jq[1471]: false Nov 23 23:08:02.429284 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 23:08:02.431468 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 23:08:02.434076 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:08:02.436411 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 23:08:02.440887 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:08:02.442710 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:08:02.443154 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:08:02.443676 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:08:02.445284 extend-filesystems[1473]: Found /dev/vda6 Nov 23 23:08:02.450210 extend-filesystems[1473]: Found /dev/vda9 Nov 23 23:08:02.448328 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:08:02.450959 extend-filesystems[1473]: Checking size of /dev/vda9 Nov 23 23:08:02.450304 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 23:08:02.455744 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:08:02.457201 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:08:02.457416 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:08:02.460813 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 23:08:02.461008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:08:02.464738 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 23:08:02.467292 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:08:02.467500 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:08:02.475573 jq[1490]: true Nov 23 23:08:02.481984 update_engine[1488]: I20251123 23:08:02.481650 1488 main.cc:92] Flatcar Update Engine starting Nov 23 23:08:02.485237 extend-filesystems[1473]: Resized partition /dev/vda9 Nov 23 23:08:02.485576 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:08:02.491130 extend-filesystems[1518]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 23:08:02.492783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:08:02.496707 tar[1498]: linux-arm64/LICENSE Nov 23 23:08:02.496707 tar[1498]: linux-arm64/helm Nov 23 23:08:02.513703 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 23 23:08:02.517298 dbus-daemon[1467]: [system] SELinux support is enabled Nov 23 23:08:02.518278 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:08:02.520759 jq[1515]: true Nov 23 23:08:02.526835 update_engine[1488]: I20251123 23:08:02.526773 1488 update_check_scheduler.cc:74] Next update check in 3m0s Nov 23 23:08:02.529191 systemd[1]: Started update-engine.service - Update Engine. 
Nov 23 23:08:02.531446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:08:02.531478 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 23:08:02.532777 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:08:02.532803 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:08:02.546212 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 23 23:08:02.553364 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:08:02.557817 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 23 23:08:02.557817 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 23 23:08:02.557817 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 23 23:08:02.564239 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Nov 23 23:08:02.563510 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:08:02.565431 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:08:02.592378 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:08:02.597184 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:08:02.599625 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 23 23:08:02.603756 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:08:02.645404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:08:02.694266 containerd[1501]: time="2025-11-23T23:08:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:08:02.696538 containerd[1501]: time="2025-11-23T23:08:02.694879240Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:08:02.696080 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 23:08:02.696297 systemd-logind[1486]: New seat seat0. Nov 23 23:08:02.698448 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 23 23:08:02.710469 containerd[1501]: time="2025-11-23T23:08:02.710405920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.4µs" Nov 23 23:08:02.710469 containerd[1501]: time="2025-11-23T23:08:02.710463560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:08:02.710586 containerd[1501]: time="2025-11-23T23:08:02.710487520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:08:02.710674 containerd[1501]: time="2025-11-23T23:08:02.710648640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:08:02.710719 containerd[1501]: time="2025-11-23T23:08:02.710677960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:08:02.710719 containerd[1501]: time="2025-11-23T23:08:02.710710240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:08:02.710808 containerd[1501]: time="2025-11-23T23:08:02.710770920Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:08:02.710808 containerd[1501]: time="2025-11-23T23:08:02.710787120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712514 containerd[1501]: time="2025-11-23T23:08:02.711503680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712514 containerd[1501]: time="2025-11-23T23:08:02.712243200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712514 containerd[1501]: time="2025-11-23T23:08:02.712268240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712514 containerd[1501]: time="2025-11-23T23:08:02.712284520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712514 containerd[1501]: time="2025-11-23T23:08:02.712407000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712672 containerd[1501]: time="2025-11-23T23:08:02.712609200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712672 containerd[1501]: time="2025-11-23T23:08:02.712638280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:08:02.712672 containerd[1501]: time="2025-11-23T23:08:02.712649240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:08:02.712731 containerd[1501]: time="2025-11-23T23:08:02.712693640Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:08:02.712934 containerd[1501]: 
time="2025-11-23T23:08:02.712911280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:08:02.712999 containerd[1501]: time="2025-11-23T23:08:02.712979960Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:08:02.716679 containerd[1501]: time="2025-11-23T23:08:02.716639000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:08:02.716762 containerd[1501]: time="2025-11-23T23:08:02.716704120Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:08:02.716762 containerd[1501]: time="2025-11-23T23:08:02.716737680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:08:02.716762 containerd[1501]: time="2025-11-23T23:08:02.716752320Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716765720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716778000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716800080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716812120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716823920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716834240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716843040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:08:02.716879 containerd[1501]: time="2025-11-23T23:08:02.716855720Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:08:02.717014 containerd[1501]: time="2025-11-23T23:08:02.716990000Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:08:02.717034 containerd[1501]: time="2025-11-23T23:08:02.717012680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:08:02.717034 containerd[1501]: time="2025-11-23T23:08:02.717027440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 23:08:02.717076 containerd[1501]: time="2025-11-23T23:08:02.717040880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:08:02.717076 containerd[1501]: time="2025-11-23T23:08:02.717065360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:08:02.717114 containerd[1501]: time="2025-11-23T23:08:02.717078360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:08:02.717114 containerd[1501]: 
time="2025-11-23T23:08:02.717090160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:08:02.717114 containerd[1501]: time="2025-11-23T23:08:02.717101200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:08:02.717179 containerd[1501]: time="2025-11-23T23:08:02.717114280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 23:08:02.717179 containerd[1501]: time="2025-11-23T23:08:02.717125160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:08:02.717179 containerd[1501]: time="2025-11-23T23:08:02.717134960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:08:02.717373 containerd[1501]: time="2025-11-23T23:08:02.717344920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:08:02.717373 containerd[1501]: time="2025-11-23T23:08:02.717371320Z" level=info msg="Start snapshots syncer" Nov 23 23:08:02.717427 containerd[1501]: time="2025-11-23T23:08:02.717400280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:08:02.717732 containerd[1501]: time="2025-11-23T23:08:02.717663440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:08:02.717853 containerd[1501]: time="2025-11-23T23:08:02.717750760Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:08:02.717853 containerd[1501]: time="2025-11-23T23:08:02.717796160Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Nov 23 23:08:02.717951 containerd[1501]: time="2025-11-23T23:08:02.717924400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:08:02.717981 containerd[1501]: time="2025-11-23T23:08:02.717957040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:08:02.717981 containerd[1501]: time="2025-11-23T23:08:02.717969040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:08:02.718015 containerd[1501]: time="2025-11-23T23:08:02.717980720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:08:02.718015 containerd[1501]: time="2025-11-23T23:08:02.717993960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 23:08:02.718015 containerd[1501]: time="2025-11-23T23:08:02.718004200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:08:02.718089 containerd[1501]: time="2025-11-23T23:08:02.718015080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:08:02.718089 containerd[1501]: time="2025-11-23T23:08:02.718038800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:08:02.718089 containerd[1501]: time="2025-11-23T23:08:02.718067480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:08:02.718089 containerd[1501]: time="2025-11-23T23:08:02.718084840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:08:02.718153 containerd[1501]: time="2025-11-23T23:08:02.718117720Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:08:02.718153 containerd[1501]: time="2025-11-23T23:08:02.718135800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:08:02.718153 containerd[1501]: time="2025-11-23T23:08:02.718146320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:08:02.718220 containerd[1501]: time="2025-11-23T23:08:02.718155680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:08:02.718220 containerd[1501]: time="2025-11-23T23:08:02.718181520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:08:02.718220 containerd[1501]: time="2025-11-23T23:08:02.718193160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:08:02.718220 containerd[1501]: time="2025-11-23T23:08:02.718203960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:08:02.718293 containerd[1501]: time="2025-11-23T23:08:02.718280480Z" level=info msg="runtime interface created" Nov 23 23:08:02.718293 containerd[1501]: time="2025-11-23T23:08:02.718286480Z" level=info msg="created NRI interface" Nov 23 23:08:02.718326 containerd[1501]: time="2025-11-23T23:08:02.718294480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Nov 23 23:08:02.718326 containerd[1501]: time="2025-11-23T23:08:02.718306520Z" level=info msg="Connect containerd service" Nov 23 23:08:02.718360 containerd[1501]: time="2025-11-23T23:08:02.718325920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:08:02.719114 containerd[1501]: time="2025-11-23T23:08:02.719079800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:08:02.793485 containerd[1501]: time="2025-11-23T23:08:02.793373920Z" level=info msg="Start subscribing containerd event" Nov 23 23:08:02.793594 containerd[1501]: time="2025-11-23T23:08:02.793504160Z" level=info msg="Start recovering state" Nov 23 23:08:02.793677 containerd[1501]: time="2025-11-23T23:08:02.793657400Z" level=info msg="Start event monitor" Nov 23 23:08:02.793866 containerd[1501]: time="2025-11-23T23:08:02.793685680Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:08:02.793866 containerd[1501]: time="2025-11-23T23:08:02.793698080Z" level=info msg="Start streaming server" Nov 23 23:08:02.793866 containerd[1501]: time="2025-11-23T23:08:02.793709680Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:08:02.793866 containerd[1501]: time="2025-11-23T23:08:02.793716720Z" level=info msg="runtime interface starting up..." Nov 23 23:08:02.793866 containerd[1501]: time="2025-11-23T23:08:02.793722400Z" level=info msg="starting plugins..." Nov 23 23:08:02.793866 containerd[1501]: time="2025-11-23T23:08:02.793738000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:08:02.794236 containerd[1501]: time="2025-11-23T23:08:02.794213000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:08:02.794349 containerd[1501]: time="2025-11-23T23:08:02.794333680Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:08:02.794515 containerd[1501]: time="2025-11-23T23:08:02.794502800Z" level=info msg="containerd successfully booted in 0.100742s" Nov 23 23:08:02.794609 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:08:02.892750 tar[1498]: linux-arm64/README.md Nov 23 23:08:02.914208 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 23:08:03.231605 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:08:03.252597 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:08:03.256106 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:08:03.276371 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:08:03.276638 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:08:03.280440 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:08:03.316494 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:08:03.320773 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:08:03.323239 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:08:03.324603 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:08:03.370481 systemd-networkd[1438]: eth0: Gained IPv6LL Nov 23 23:08:03.374215 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
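The CNI load error above is expected at this stage: the cri plugin config dumped earlier points confDir at /etc/cni/net.d, and no network plugin has written a config there yet, so containerd simply defers pod networking. A minimal sketch of the same check in Python (the directory comes from the log; the accepted extension list is an assumption):

    import os

    CNI_CONF_DIR = "/etc/cni/net.d"                    # confDir from the cri plugin config above
    CNI_EXTENSIONS = (".conf", ".conflist", ".json")   # assumed set of loadable config extensions

    def cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        """Return CNI config files a runtime could load from conf_dir, if any."""
        if not os.path.isdir(conf_dir):
            return []
        return sorted(
            os.path.join(conf_dir, name)
            for name in os.listdir(conf_dir)
            if name.endswith(CNI_EXTENSIONS)
        )

    if __name__ == "__main__":
        found = cni_configs()
        print(found if found else f"no network config found in {CNI_CONF_DIR}")

Once a CNI plugin (installed under the binDir /opt/cni/bin named in the same config) drops a config into that directory, the "cni network conf syncer" started below can pick it up without a containerd restart.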
Nov 23 23:08:03.376404 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:08:03.378945 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 23 23:08:03.381659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:03.389754 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:08:03.405689 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 23 23:08:03.406306 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 23 23:08:03.407903 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:08:03.412441 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:08:03.994815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:03.996751 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:08:04.002324 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:08:04.008339 systemd[1]: Startup finished in 2.135s (kernel) + 5.049s (initrd) + 3.403s (userspace) = 10.589s. Nov 23 23:08:04.363385 kubelet[1617]: E1123 23:08:04.361450 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:08:04.368065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:08:04.368238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:08:04.370723 systemd[1]: kubelet.service: Consumed 711ms CPU time, 247.5M memory peak. Nov 23 23:08:08.712024 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:08:08.715071 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:59648.service - OpenSSH per-connection server daemon (10.0.0.1:59648). Nov 23 23:08:08.821156 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 59648 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:08.823622 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:08.830881 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:08:08.832201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:08:08.838985 systemd-logind[1486]: New session 1 of user core. Nov 23 23:08:08.856843 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:08:08.860551 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 23:08:08.875825 (systemd)[1635]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:08:08.878402 systemd-logind[1486]: New session c1 of user core. Nov 23 23:08:09.005830 systemd[1635]: Queued start job for default target default.target. Nov 23 23:08:09.016287 systemd[1635]: Created slice app.slice - User Application Slice. Nov 23 23:08:09.016318 systemd[1635]: Reached target paths.target - Paths. Nov 23 23:08:09.016357 systemd[1635]: Reached target timers.target - Timers. Nov 23 23:08:09.017640 systemd[1635]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Nov 23 23:08:09.028533 systemd[1635]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:08:09.028664 systemd[1635]: Reached target sockets.target - Sockets. Nov 23 23:08:09.028710 systemd[1635]: Reached target basic.target - Basic System. Nov 23 23:08:09.028738 systemd[1635]: Reached target default.target - Main User Target. Nov 23 23:08:09.028764 systemd[1635]: Startup finished in 143ms. Nov 23 23:08:09.028870 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:08:09.030150 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:08:09.104676 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:59652.service - OpenSSH per-connection server daemon (10.0.0.1:59652). Nov 23 23:08:09.181296 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 59652 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:09.184131 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:09.188968 systemd-logind[1486]: New session 2 of user core. Nov 23 23:08:09.200413 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 23:08:09.253287 sshd[1649]: Connection closed by 10.0.0.1 port 59652 Nov 23 23:08:09.253923 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Nov 23 23:08:09.270365 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:59652.service: Deactivated successfully. Nov 23 23:08:09.272569 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:08:09.273350 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Nov 23 23:08:09.275418 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Nov 23 23:08:09.276325 systemd-logind[1486]: Removed session 2. Nov 23 23:08:09.332075 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:09.333522 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:09.338221 systemd-logind[1486]: New session 3 of user core. Nov 23 23:08:09.353390 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:08:09.401593 sshd[1658]: Connection closed by 10.0.0.1 port 50606 Nov 23 23:08:09.401943 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Nov 23 23:08:09.422263 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:50606.service: Deactivated successfully. Nov 23 23:08:09.424522 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:08:09.425263 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Nov 23 23:08:09.427514 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:50618.service - OpenSSH per-connection server daemon (10.0.0.1:50618). Nov 23 23:08:09.428374 systemd-logind[1486]: Removed session 3. Nov 23 23:08:09.486162 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 50618 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:09.487508 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:09.492322 systemd-logind[1486]: New session 4 of user core. Nov 23 23:08:09.510399 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 23 23:08:09.564419 sshd[1667]: Connection closed by 10.0.0.1 port 50618 Nov 23 23:08:09.564679 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Nov 23 23:08:09.578447 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:50618.service: Deactivated successfully. Nov 23 23:08:09.581634 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:08:09.582342 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:08:09.584543 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:50630.service - OpenSSH per-connection server daemon (10.0.0.1:50630). Nov 23 23:08:09.584997 systemd-logind[1486]: Removed session 4. Nov 23 23:08:09.652251 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 50630 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:09.653636 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:09.659679 systemd-logind[1486]: New session 5 of user core. Nov 23 23:08:09.668403 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 23:08:09.731101 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:08:09.731386 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:08:09.747186 sudo[1677]: pam_unix(sudo:session): session closed for user root Nov 23 23:08:09.749186 sshd[1676]: Connection closed by 10.0.0.1 port 50630 Nov 23 23:08:09.749557 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Nov 23 23:08:09.760631 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:50630.service: Deactivated successfully. Nov 23 23:08:09.764151 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:08:09.767230 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:08:09.771219 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:50636.service - OpenSSH per-connection server daemon (10.0.0.1:50636). Nov 23 23:08:09.772569 systemd-logind[1486]: Removed session 5. Nov 23 23:08:09.848305 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 50636 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:09.850092 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:09.854713 systemd-logind[1486]: New session 6 of user core. Nov 23 23:08:09.874418 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:08:09.926927 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:08:09.927915 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:08:10.023382 sudo[1688]: pam_unix(sudo:session): session closed for user root Nov 23 23:08:10.029600 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:08:10.029984 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:08:10.042340 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:08:10.085812 augenrules[1710]: No rules Nov 23 23:08:10.087147 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:08:10.088257 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 23 23:08:10.089159 sudo[1687]: pam_unix(sudo:session): session closed for user root Nov 23 23:08:10.092345 sshd[1686]: Connection closed by 10.0.0.1 port 50636 Nov 23 23:08:10.092978 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Nov 23 23:08:10.102101 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:50636.service: Deactivated successfully. Nov 23 23:08:10.103639 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:08:10.109845 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:08:10.112530 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:50642.service - OpenSSH per-connection server daemon (10.0.0.1:50642). Nov 23 23:08:10.112999 systemd-logind[1486]: Removed session 6. Nov 23 23:08:10.179027 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 50642 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:08:10.180851 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:08:10.185401 systemd-logind[1486]: New session 7 of user core. Nov 23 23:08:10.195375 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 23:08:10.248862 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:08:10.249155 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:08:10.568098 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:08:10.596020 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:08:10.818449 dockerd[1743]: time="2025-11-23T23:08:10.818310247Z" level=info msg="Starting up" Nov 23 23:08:10.819579 dockerd[1743]: time="2025-11-23T23:08:10.819545516Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:08:10.830795 dockerd[1743]: time="2025-11-23T23:08:10.830741259Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:08:10.869451 dockerd[1743]: time="2025-11-23T23:08:10.869386407Z" level=info msg="Loading containers: start." Nov 23 23:08:10.880199 kernel: Initializing XFRM netlink socket Nov 23 23:08:11.090912 systemd-networkd[1438]: docker0: Link UP Nov 23 23:08:11.097409 dockerd[1743]: time="2025-11-23T23:08:11.097359083Z" level=info msg="Loading containers: done." Nov 23 23:08:11.110796 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1716051971-merged.mount: Deactivated successfully. 
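dockerd is starting up here; a few entries below it reports the daemon listening on /run/docker.sock. A quick liveness check against that socket, assuming the third-party docker Python SDK is installed (it is not part of this image), could look like:

    import docker  # third-party SDK: pip install docker

    # Talk to the daemon over the unix socket it announces below.
    client = docker.DockerClient(base_url="unix:///run/docker.sock")

    print("daemon responding:", client.ping())              # True when the API answers
    print("server version:", client.version()["Version"])   # e.g. the 28.0.4 reported below

Anything beyond a ping (pulling images, running containers) goes through the same client object.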
Nov 23 23:08:11.113858 dockerd[1743]: time="2025-11-23T23:08:11.113818875Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:08:11.113966 dockerd[1743]: time="2025-11-23T23:08:11.113940115Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:08:11.114050 dockerd[1743]: time="2025-11-23T23:08:11.114032087Z" level=info msg="Initializing buildkit" Nov 23 23:08:11.142504 dockerd[1743]: time="2025-11-23T23:08:11.142463612Z" level=info msg="Completed buildkit initialization" Nov 23 23:08:11.150662 dockerd[1743]: time="2025-11-23T23:08:11.150574669Z" level=info msg="Daemon has completed initialization" Nov 23 23:08:11.150830 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:08:11.151179 dockerd[1743]: time="2025-11-23T23:08:11.150678573Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:08:11.611074 containerd[1501]: time="2025-11-23T23:08:11.611011805Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\"" Nov 23 23:08:12.270835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195501088.mount: Deactivated successfully. Nov 23 23:08:13.248732 containerd[1501]: time="2025-11-23T23:08:13.248665332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:13.250532 containerd[1501]: time="2025-11-23T23:08:13.250485642Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: active requests=0, bytes read=24563046" Nov 23 23:08:13.251544 containerd[1501]: time="2025-11-23T23:08:13.251512419Z" level=info msg="ImageCreate event name:\"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:13.255332 containerd[1501]: time="2025-11-23T23:08:13.255283031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:13.256729 containerd[1501]: time="2025-11-23T23:08:13.256224614Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.2\" with image id \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\", size \"24559643\" in 1.645157315s" Nov 23 23:08:13.256729 containerd[1501]: time="2025-11-23T23:08:13.256266650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\" returns image reference \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\"" Nov 23 23:08:13.257110 containerd[1501]: time="2025-11-23T23:08:13.257062785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\"" Nov 23 23:08:14.319910 containerd[1501]: time="2025-11-23T23:08:14.319861583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:14.320961 containerd[1501]: time="2025-11-23T23:08:14.320886262Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.34.2: active requests=0, bytes read=19134214" Nov 23 23:08:14.322578 containerd[1501]: time="2025-11-23T23:08:14.322540336Z" level=info msg="ImageCreate event name:\"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:14.326101 containerd[1501]: time="2025-11-23T23:08:14.326045246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:14.327198 containerd[1501]: time="2025-11-23T23:08:14.327010716Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.2\" with image id \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\", size \"20718696\" in 1.069907336s" Nov 23 23:08:14.327198 containerd[1501]: time="2025-11-23T23:08:14.327093144Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\" returns image reference \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\"" Nov 23 23:08:14.327573 containerd[1501]: time="2025-11-23T23:08:14.327545234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\"" Nov 23 23:08:14.618663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:08:14.620440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:14.797697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:14.802430 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:08:14.844415 kubelet[2029]: E1123 23:08:14.844343 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:08:14.847915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:08:14.848065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:08:14.848563 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107.8M memory peak. 
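This second kubelet exit has the same cause as the first one: /var/lib/kubelet/config.yaml does not exist yet, and that file is normally written only once kubeadm init or join provisions the node. A trivial pre-flight sketch for the condition the error describes (only the path from the log is used; nothing else about the unit is assumed):

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path named in the kubelet error above

    def kubelet_config_present(path: Path = KUBELET_CONFIG) -> bool:
        """True once provisioning (typically kubeadm) has written the kubelet config file."""
        return path.is_file()

    if __name__ == "__main__":
        state = "present" if kubelet_config_present() else "missing; kubelet will keep exiting until it is written"
        print(f"{KUBELET_CONFIG}: {state}")

systemd keeps scheduling restarts in the meantime, which is why the same error recurs until the file appears.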
Nov 23 23:08:15.322651 containerd[1501]: time="2025-11-23T23:08:15.322588827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:15.323664 containerd[1501]: time="2025-11-23T23:08:15.323508253Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: active requests=0, bytes read=14191285" Nov 23 23:08:15.324566 containerd[1501]: time="2025-11-23T23:08:15.324541086Z" level=info msg="ImageCreate event name:\"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:15.328215 containerd[1501]: time="2025-11-23T23:08:15.328070395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:15.329108 containerd[1501]: time="2025-11-23T23:08:15.329054870Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.2\" with image id \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\", size \"15775785\" in 1.001473607s" Nov 23 23:08:15.329108 containerd[1501]: time="2025-11-23T23:08:15.329095501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\" returns image reference \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\"" Nov 23 23:08:15.329715 containerd[1501]: time="2025-11-23T23:08:15.329683393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\"" Nov 23 23:08:16.288397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413199492.mount: Deactivated successfully. 
Nov 23 23:08:16.478214 containerd[1501]: time="2025-11-23T23:08:16.478140652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:16.479108 containerd[1501]: time="2025-11-23T23:08:16.479072202Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.2: active requests=0, bytes read=22803243" Nov 23 23:08:16.480179 containerd[1501]: time="2025-11-23T23:08:16.480033974Z" level=info msg="ImageCreate event name:\"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:16.486305 containerd[1501]: time="2025-11-23T23:08:16.486250328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:16.487022 containerd[1501]: time="2025-11-23T23:08:16.486970286Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.2\" with image id \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\", repo tag \"registry.k8s.io/kube-proxy:v1.34.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\", size \"22802260\" in 1.15724234s" Nov 23 23:08:16.487022 containerd[1501]: time="2025-11-23T23:08:16.487011355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\" returns image reference \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\"" Nov 23 23:08:16.487505 containerd[1501]: time="2025-11-23T23:08:16.487477171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 23 23:08:16.982398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839824811.mount: Deactivated successfully. 
Nov 23 23:08:17.919188 containerd[1501]: time="2025-11-23T23:08:17.918454811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:17.919594 containerd[1501]: time="2025-11-23T23:08:17.919188786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Nov 23 23:08:17.920238 containerd[1501]: time="2025-11-23T23:08:17.920201709Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:17.923140 containerd[1501]: time="2025-11-23T23:08:17.923079891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:17.925336 containerd[1501]: time="2025-11-23T23:08:17.925294986Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.437766138s" Nov 23 23:08:17.925475 containerd[1501]: time="2025-11-23T23:08:17.925337534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 23 23:08:17.925808 containerd[1501]: time="2025-11-23T23:08:17.925781474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 23 23:08:18.391795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682864211.mount: Deactivated successfully. 
Nov 23 23:08:18.399295 containerd[1501]: time="2025-11-23T23:08:18.399225649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:18.400145 containerd[1501]: time="2025-11-23T23:08:18.400082591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Nov 23 23:08:18.401129 containerd[1501]: time="2025-11-23T23:08:18.401067054Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:18.403520 containerd[1501]: time="2025-11-23T23:08:18.403443437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:18.404511 containerd[1501]: time="2025-11-23T23:08:18.404330919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 478.514541ms" Nov 23 23:08:18.404511 containerd[1501]: time="2025-11-23T23:08:18.404362218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 23 23:08:18.404977 containerd[1501]: time="2025-11-23T23:08:18.404948549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 23 23:08:19.062500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460418533.mount: Deactivated successfully. Nov 23 23:08:21.397737 containerd[1501]: time="2025-11-23T23:08:21.397671626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:21.398340 containerd[1501]: time="2025-11-23T23:08:21.398305076Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Nov 23 23:08:21.401050 containerd[1501]: time="2025-11-23T23:08:21.400982231Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:21.404205 containerd[1501]: time="2025-11-23T23:08:21.404148442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:08:21.405406 containerd[1501]: time="2025-11-23T23:08:21.405265984Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.000278092s" Nov 23 23:08:21.405406 containerd[1501]: time="2025-11-23T23:08:21.405309687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 23 23:08:24.914239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
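Each of the pulls in this stretch ends with a containerd "Pulled image … size … in …" summary. To total what the node fetched, a small sketch that scrapes those summaries out of a saved journal dump (the regex is written against the escaped-quote form the entries take in this log) might be:

    import re
    import sys

    # Matches containerd pull summaries as they appear in this journal dump,
    # where quotes inside msg="..." are escaped as \".
    PULL_RE = re.compile(
        r'Pulled image \\"(?P<ref>[^\\"]+)\\".*?size \\"(?P<size>\d+)\\" in (?P<took>[\d.]+m?s)'
    )

    def pull_summaries(text: str):
        """Yield (image ref, size in bytes, duration string) per pull summary found."""
        for m in PULL_RE.finditer(text):
            yield m.group("ref"), int(m.group("size")), m.group("took")

    if __name__ == "__main__":
        log = sys.stdin.read()
        total = 0
        for ref, size, took in pull_summaries(log):
            total += size
            print(f"{ref}: {size} bytes in {took}")
        print(f"total pulled: {total} bytes")

Fed the kube-apiserver through etcd entries above, it would report roughly 200 MB of image data across the seven pulls.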
Nov 23 23:08:24.916086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:25.086419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:25.097562 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:08:25.138735 kubelet[2190]: E1123 23:08:25.138653 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:08:25.141487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:08:25.141633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:08:25.142247 systemd[1]: kubelet.service: Consumed 159ms CPU time, 105.8M memory peak. Nov 23 23:08:26.245280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:26.245499 systemd[1]: kubelet.service: Consumed 159ms CPU time, 105.8M memory peak. Nov 23 23:08:26.248000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:26.277825 systemd[1]: Reload requested from client PID 2205 ('systemctl') (unit session-7.scope)... Nov 23 23:08:26.277843 systemd[1]: Reloading... Nov 23 23:08:26.361239 zram_generator::config[2247]: No configuration found. Nov 23 23:08:26.668898 systemd[1]: Reloading finished in 390 ms. Nov 23 23:08:26.726497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:26.730976 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:08:26.731249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:26.731304 systemd[1]: kubelet.service: Consumed 111ms CPU time, 95.2M memory peak. Nov 23 23:08:26.733001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:26.887113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:26.899567 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:08:26.939043 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:08:26.939043 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 23:08:26.939043 kubelet[2294]: I1123 23:08:26.938514 2294 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:08:27.376535 kubelet[2294]: I1123 23:08:27.376286 2294 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 23:08:27.376535 kubelet[2294]: I1123 23:08:27.376319 2294 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:08:27.377556 kubelet[2294]: I1123 23:08:27.377518 2294 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 23:08:27.377556 kubelet[2294]: I1123 23:08:27.377547 2294 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:08:27.377865 kubelet[2294]: I1123 23:08:27.377848 2294 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:08:27.461100 kubelet[2294]: E1123 23:08:27.460456 2294 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 23 23:08:27.461772 kubelet[2294]: I1123 23:08:27.461742 2294 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:08:27.467509 kubelet[2294]: I1123 23:08:27.467471 2294 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:08:27.470414 kubelet[2294]: I1123 23:08:27.470373 2294 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 23:08:27.470629 kubelet[2294]: I1123 23:08:27.470582 2294 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:08:27.470773 kubelet[2294]: I1123 23:08:27.470616 2294 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:08:27.470875 kubelet[2294]: I1123 23:08:27.470776 2294 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:08:27.470875 kubelet[2294]: I1123 23:08:27.470786 2294 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 23:08:27.470921 kubelet[2294]: I1123 23:08:27.470903 2294 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 23:08:27.473210 kubelet[2294]: I1123 23:08:27.473155 2294 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:08:27.474384 kubelet[2294]: I1123 23:08:27.474350 2294 kubelet.go:475] "Attempting to sync node with API server" Nov 23 23:08:27.474384 kubelet[2294]: I1123 23:08:27.474380 2294 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:08:27.474455 kubelet[2294]: I1123 23:08:27.474408 2294 kubelet.go:387] "Adding apiserver pod source" Nov 23 23:08:27.474455 kubelet[2294]: I1123 23:08:27.474422 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:08:27.474963 kubelet[2294]: E1123 23:08:27.474922 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 23:08:27.476105 kubelet[2294]: E1123 23:08:27.475759 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 23:08:27.476531 kubelet[2294]: I1123 23:08:27.476509 2294 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:08:27.477176 kubelet[2294]: I1123 23:08:27.477153 2294 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:08:27.477318 kubelet[2294]: I1123 23:08:27.477302 2294 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 23:08:27.477364 kubelet[2294]: W1123 23:08:27.477349 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 23:08:27.480371 kubelet[2294]: I1123 23:08:27.480203 2294 server.go:1262] "Started kubelet" Nov 23 23:08:27.480758 kubelet[2294]: I1123 23:08:27.480704 2294 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:08:27.480796 kubelet[2294]: I1123 23:08:27.480771 2294 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 23:08:27.480815 kubelet[2294]: I1123 23:08:27.480788 2294 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:08:27.481134 kubelet[2294]: I1123 23:08:27.481112 2294 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:08:27.481197 kubelet[2294]: I1123 23:08:27.481151 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:08:27.482605 kubelet[2294]: I1123 23:08:27.482300 2294 server.go:310] "Adding debug handlers to kubelet server" Nov 23 23:08:27.483667 kubelet[2294]: I1123 23:08:27.483640 2294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:08:27.486335 kubelet[2294]: E1123 23:08:27.484519 2294 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:08:27.486335 kubelet[2294]: I1123 23:08:27.484562 2294 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 23:08:27.486335 kubelet[2294]: I1123 23:08:27.484754 2294 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 23:08:27.486335 kubelet[2294]: I1123 23:08:27.484801 2294 reconciler.go:29] "Reconciler: start to sync state" Nov 23 23:08:27.486335 kubelet[2294]: E1123 23:08:27.485348 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Nov 23 23:08:27.486335 kubelet[2294]: E1123 23:08:27.485447 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 23:08:27.486603 kubelet[2294]: E1123 23:08:27.486426 2294 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:08:27.487425 kubelet[2294]: I1123 23:08:27.487407 2294 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:08:27.487618 kubelet[2294]: I1123 23:08:27.487596 2294 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:08:27.488978 kubelet[2294]: E1123 23:08:27.487360 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac57453b3fdd4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:08:27.480153556 +0000 UTC m=+0.577228276,LastTimestamp:2025-11-23 23:08:27.480153556 +0000 UTC m=+0.577228276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:08:27.490330 kubelet[2294]: I1123 23:08:27.489996 2294 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:08:27.503782 kubelet[2294]: I1123 23:08:27.503678 2294 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 23 23:08:27.505771 kubelet[2294]: I1123 23:08:27.505640 2294 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:08:27.505771 kubelet[2294]: I1123 23:08:27.505666 2294 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:08:27.505771 kubelet[2294]: I1123 23:08:27.505721 2294 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:08:27.507066 kubelet[2294]: I1123 23:08:27.507038 2294 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:08:27.507066 kubelet[2294]: I1123 23:08:27.507067 2294 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 23:08:27.507557 kubelet[2294]: I1123 23:08:27.507198 2294 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 23:08:27.507557 kubelet[2294]: E1123 23:08:27.507384 2294 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:08:27.508612 kubelet[2294]: E1123 23:08:27.508492 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 23:08:27.509948 kubelet[2294]: I1123 23:08:27.509778 2294 policy_none.go:49] "None policy: Start" Nov 23 23:08:27.509948 kubelet[2294]: I1123 23:08:27.509801 2294 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 23:08:27.509948 kubelet[2294]: I1123 23:08:27.509815 2294 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 23:08:27.511934 kubelet[2294]: I1123 23:08:27.511909 2294 policy_none.go:47] "Start" Nov 23 23:08:27.516095 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:08:27.534304 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:08:27.537575 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:08:27.558374 kubelet[2294]: E1123 23:08:27.558326 2294 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:08:27.558824 kubelet[2294]: I1123 23:08:27.558794 2294 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:08:27.559193 kubelet[2294]: I1123 23:08:27.558816 2294 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:08:27.559193 kubelet[2294]: I1123 23:08:27.559056 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:08:27.560967 kubelet[2294]: E1123 23:08:27.560940 2294 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:08:27.561067 kubelet[2294]: E1123 23:08:27.560986 2294 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 23 23:08:27.631146 systemd[1]: Created slice kubepods-burstable-pod460d03667103e96bfc2770da6959e5bc.slice - libcontainer container kubepods-burstable-pod460d03667103e96bfc2770da6959e5bc.slice. Nov 23 23:08:27.645823 kubelet[2294]: E1123 23:08:27.645374 2294 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:08:27.651125 systemd[1]: Created slice kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice - libcontainer container kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice. 
Nov 23 23:08:27.660781 kubelet[2294]: I1123 23:08:27.660750 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:08:27.661269 kubelet[2294]: E1123 23:08:27.661227 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 23 23:08:27.665105 kubelet[2294]: E1123 23:08:27.664960 2294 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:08:27.667580 systemd[1]: Created slice kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice - libcontainer container kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice. Nov 23 23:08:27.669179 kubelet[2294]: E1123 23:08:27.669114 2294 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:08:27.686483 kubelet[2294]: I1123 23:08:27.686412 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/460d03667103e96bfc2770da6959e5bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"460d03667103e96bfc2770da6959e5bc\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:27.686483 kubelet[2294]: I1123 23:08:27.686452 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:27.686483 kubelet[2294]: I1123 23:08:27.686470 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:27.686483 kubelet[2294]: I1123 23:08:27.686486 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:27.686672 kubelet[2294]: I1123 23:08:27.686502 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:27.686672 kubelet[2294]: I1123 23:08:27.686550 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/460d03667103e96bfc2770da6959e5bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"460d03667103e96bfc2770da6959e5bc\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:27.686672 kubelet[2294]: I1123 23:08:27.686594 2294 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:27.686672 kubelet[2294]: I1123 23:08:27.686629 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:27.686672 kubelet[2294]: I1123 23:08:27.686645 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/460d03667103e96bfc2770da6959e5bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"460d03667103e96bfc2770da6959e5bc\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:27.686799 kubelet[2294]: E1123 23:08:27.686772 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Nov 23 23:08:27.863487 kubelet[2294]: I1123 23:08:27.863450 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:08:27.863852 kubelet[2294]: E1123 23:08:27.863817 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 23 23:08:27.949348 containerd[1501]: time="2025-11-23T23:08:27.949290053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:460d03667103e96bfc2770da6959e5bc,Namespace:kube-system,Attempt:0,}" Nov 23 23:08:27.969102 containerd[1501]: time="2025-11-23T23:08:27.969058811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,}" Nov 23 23:08:27.972474 containerd[1501]: time="2025-11-23T23:08:27.972427243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,}" Nov 23 23:08:28.087866 kubelet[2294]: E1123 23:08:28.087825 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Nov 23 23:08:28.265883 kubelet[2294]: I1123 23:08:28.265534 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:08:28.266055 kubelet[2294]: E1123 23:08:28.265893 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 23 23:08:28.403355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234774658.mount: Deactivated successfully. 
Nov 23 23:08:28.410250 containerd[1501]: time="2025-11-23T23:08:28.409464069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:08:28.412576 containerd[1501]: time="2025-11-23T23:08:28.412520843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 23 23:08:28.414213 containerd[1501]: time="2025-11-23T23:08:28.414110410Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:08:28.417375 containerd[1501]: time="2025-11-23T23:08:28.417324637Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:08:28.418934 containerd[1501]: time="2025-11-23T23:08:28.418809850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:08:28.419707 containerd[1501]: time="2025-11-23T23:08:28.419663293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:08:28.420838 containerd[1501]: time="2025-11-23T23:08:28.420377450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 469.470905ms" Nov 23 23:08:28.420838 containerd[1501]: time="2025-11-23T23:08:28.420502532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 23:08:28.421469 containerd[1501]: time="2025-11-23T23:08:28.421437602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 23:08:28.425030 containerd[1501]: time="2025-11-23T23:08:28.424983259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 454.453528ms" Nov 23 23:08:28.426392 containerd[1501]: time="2025-11-23T23:08:28.426349032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.675308ms" Nov 23 23:08:28.443201 containerd[1501]: time="2025-11-23T23:08:28.442889841Z" level=info msg="connecting to shim 6c9633a2af7db11b71a89c3cfde63cba3de2f01455658eeea7ddd91e8aec6ea8" address="unix:///run/containerd/s/4e77ad003ee2a4c3a6988436901f7cc4545ba864bf772bcf7a6010d65313d514" namespace=k8s.io protocol=ttrpc version=3 Nov 
23 23:08:28.467642 containerd[1501]: time="2025-11-23T23:08:28.467589477Z" level=info msg="connecting to shim 21e729d9a651e86969ee2c99efc81bfb549d613936993115da2db3e101c64da8" address="unix:///run/containerd/s/41d4f973034a55c43f49bf0616e24da5a5532da37e043ea96e4fdc4fe9da1b62" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:08:28.473963 containerd[1501]: time="2025-11-23T23:08:28.473912936Z" level=info msg="connecting to shim 25ae2067ab6f63df3cb5aaf45ffc5f0e1ada1a25cff842d1b1687f422b541749" address="unix:///run/containerd/s/9771f0c33ed76a9ee2cab747ea6c94ba3fba4310655c522bc0818269e0e9f538" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:08:28.477406 systemd[1]: Started cri-containerd-6c9633a2af7db11b71a89c3cfde63cba3de2f01455658eeea7ddd91e8aec6ea8.scope - libcontainer container 6c9633a2af7db11b71a89c3cfde63cba3de2f01455658eeea7ddd91e8aec6ea8. Nov 23 23:08:28.505426 systemd[1]: Started cri-containerd-21e729d9a651e86969ee2c99efc81bfb549d613936993115da2db3e101c64da8.scope - libcontainer container 21e729d9a651e86969ee2c99efc81bfb549d613936993115da2db3e101c64da8. Nov 23 23:08:28.513020 systemd[1]: Started cri-containerd-25ae2067ab6f63df3cb5aaf45ffc5f0e1ada1a25cff842d1b1687f422b541749.scope - libcontainer container 25ae2067ab6f63df3cb5aaf45ffc5f0e1ada1a25cff842d1b1687f422b541749. Nov 23 23:08:28.553267 containerd[1501]: time="2025-11-23T23:08:28.553125742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,} returns sandbox id \"21e729d9a651e86969ee2c99efc81bfb549d613936993115da2db3e101c64da8\"" Nov 23 23:08:28.555758 containerd[1501]: time="2025-11-23T23:08:28.555681150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:460d03667103e96bfc2770da6959e5bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c9633a2af7db11b71a89c3cfde63cba3de2f01455658eeea7ddd91e8aec6ea8\"" Nov 23 23:08:28.563530 containerd[1501]: time="2025-11-23T23:08:28.563450569Z" level=info msg="CreateContainer within sandbox \"21e729d9a651e86969ee2c99efc81bfb549d613936993115da2db3e101c64da8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:08:28.565654 containerd[1501]: time="2025-11-23T23:08:28.565621009Z" level=info msg="CreateContainer within sandbox \"6c9633a2af7db11b71a89c3cfde63cba3de2f01455658eeea7ddd91e8aec6ea8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:08:28.574552 containerd[1501]: time="2025-11-23T23:08:28.574471826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,} returns sandbox id \"25ae2067ab6f63df3cb5aaf45ffc5f0e1ada1a25cff842d1b1687f422b541749\"" Nov 23 23:08:28.577940 kubelet[2294]: E1123 23:08:28.577907 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 23:08:28.578266 containerd[1501]: time="2025-11-23T23:08:28.578134802Z" level=info msg="Container 65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:08:28.578905 containerd[1501]: time="2025-11-23T23:08:28.578796381Z" level=info msg="CreateContainer within sandbox 
\"25ae2067ab6f63df3cb5aaf45ffc5f0e1ada1a25cff842d1b1687f422b541749\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:08:28.582539 containerd[1501]: time="2025-11-23T23:08:28.582503171Z" level=info msg="Container 357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:08:28.587972 containerd[1501]: time="2025-11-23T23:08:28.587920929Z" level=info msg="CreateContainer within sandbox \"6c9633a2af7db11b71a89c3cfde63cba3de2f01455658eeea7ddd91e8aec6ea8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48\"" Nov 23 23:08:28.588695 containerd[1501]: time="2025-11-23T23:08:28.588667457Z" level=info msg="StartContainer for \"65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48\"" Nov 23 23:08:28.589993 kubelet[2294]: E1123 23:08:28.589962 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 23:08:28.590719 containerd[1501]: time="2025-11-23T23:08:28.590685487Z" level=info msg="connecting to shim 65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48" address="unix:///run/containerd/s/4e77ad003ee2a4c3a6988436901f7cc4545ba864bf772bcf7a6010d65313d514" protocol=ttrpc version=3 Nov 23 23:08:28.593882 containerd[1501]: time="2025-11-23T23:08:28.593837653Z" level=info msg="Container 0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:08:28.594139 containerd[1501]: time="2025-11-23T23:08:28.594024755Z" level=info msg="CreateContainer within sandbox \"21e729d9a651e86969ee2c99efc81bfb549d613936993115da2db3e101c64da8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890\"" Nov 23 23:08:28.594605 containerd[1501]: time="2025-11-23T23:08:28.594578739Z" level=info msg="StartContainer for \"357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890\"" Nov 23 23:08:28.596331 containerd[1501]: time="2025-11-23T23:08:28.596291467Z" level=info msg="connecting to shim 357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890" address="unix:///run/containerd/s/41d4f973034a55c43f49bf0616e24da5a5532da37e043ea96e4fdc4fe9da1b62" protocol=ttrpc version=3 Nov 23 23:08:28.604902 containerd[1501]: time="2025-11-23T23:08:28.604783685Z" level=info msg="CreateContainer within sandbox \"25ae2067ab6f63df3cb5aaf45ffc5f0e1ada1a25cff842d1b1687f422b541749\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e\"" Nov 23 23:08:28.606194 containerd[1501]: time="2025-11-23T23:08:28.606124970Z" level=info msg="StartContainer for \"0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e\"" Nov 23 23:08:28.607532 kubelet[2294]: E1123 23:08:28.607308 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac57453b3fdd4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:08:27.480153556 +0000 UTC m=+0.577228276,LastTimestamp:2025-11-23 23:08:27.480153556 +0000 UTC m=+0.577228276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:08:28.609114 containerd[1501]: time="2025-11-23T23:08:28.609080391Z" level=info msg="connecting to shim 0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e" address="unix:///run/containerd/s/9771f0c33ed76a9ee2cab747ea6c94ba3fba4310655c522bc0818269e0e9f538" protocol=ttrpc version=3 Nov 23 23:08:28.614265 systemd[1]: Started cri-containerd-65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48.scope - libcontainer container 65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48. Nov 23 23:08:28.617592 systemd[1]: Started cri-containerd-357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890.scope - libcontainer container 357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890. Nov 23 23:08:28.636329 systemd[1]: Started cri-containerd-0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e.scope - libcontainer container 0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e. Nov 23 23:08:28.674894 kubelet[2294]: E1123 23:08:28.674842 2294 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 23:08:28.677514 containerd[1501]: time="2025-11-23T23:08:28.677459602Z" level=info msg="StartContainer for \"357a866bf88010e08111cbab0bd61e24c439464a03874f8194bca9b97071b890\" returns successfully" Nov 23 23:08:28.679223 containerd[1501]: time="2025-11-23T23:08:28.679122514Z" level=info msg="StartContainer for \"65acb6323ea7edbc21cc3e8f72e068e5206e6eee45b011d2880d0446cc5d3d48\" returns successfully" Nov 23 23:08:28.687084 containerd[1501]: time="2025-11-23T23:08:28.687043223Z" level=info msg="StartContainer for \"0834fcc615abd4240241040fc84fd52e5e27aa82c0590dfecfab669488f5829e\" returns successfully" Nov 23 23:08:29.068103 kubelet[2294]: I1123 23:08:29.068032 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:08:29.520346 kubelet[2294]: E1123 23:08:29.520139 2294 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:08:29.524188 kubelet[2294]: E1123 23:08:29.524146 2294 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:08:29.526328 kubelet[2294]: E1123 23:08:29.526289 2294 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:08:30.159415 kubelet[2294]: E1123 23:08:30.159367 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 23 23:08:30.358257 kubelet[2294]: I1123 23:08:30.358215 2294 kubelet_node_status.go:78] 
"Successfully registered node" node="localhost" Nov 23 23:08:30.358257 kubelet[2294]: E1123 23:08:30.358258 2294 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 23 23:08:30.385712 kubelet[2294]: I1123 23:08:30.385607 2294 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:30.395323 kubelet[2294]: E1123 23:08:30.395282 2294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:30.395323 kubelet[2294]: I1123 23:08:30.395316 2294 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:30.397706 kubelet[2294]: E1123 23:08:30.397470 2294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:30.397706 kubelet[2294]: I1123 23:08:30.397500 2294 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:30.399431 kubelet[2294]: E1123 23:08:30.399402 2294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:30.476692 kubelet[2294]: I1123 23:08:30.475798 2294 apiserver.go:52] "Watching apiserver" Nov 23 23:08:30.485073 kubelet[2294]: I1123 23:08:30.485004 2294 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 23:08:30.527832 kubelet[2294]: I1123 23:08:30.527778 2294 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:30.529499 kubelet[2294]: I1123 23:08:30.529404 2294 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:30.530447 kubelet[2294]: E1123 23:08:30.530421 2294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:30.531636 kubelet[2294]: E1123 23:08:30.531459 2294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:32.351607 systemd[1]: Reload requested from client PID 2579 ('systemctl') (unit session-7.scope)... Nov 23 23:08:32.351625 systemd[1]: Reloading... Nov 23 23:08:32.435194 zram_generator::config[2622]: No configuration found. Nov 23 23:08:32.638478 systemd[1]: Reloading finished in 286 ms. Nov 23 23:08:32.667679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:08:32.684570 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:08:32.684924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:32.684984 systemd[1]: kubelet.service: Consumed 887ms CPU time, 123.4M memory peak. Nov 23 23:08:32.686831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 23 23:08:32.845868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:08:32.850047 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:08:32.892046 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:08:32.892046 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:08:32.892046 kubelet[2664]: I1123 23:08:32.892007 2664 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:08:32.899367 kubelet[2664]: I1123 23:08:32.899228 2664 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 23:08:32.899367 kubelet[2664]: I1123 23:08:32.899262 2664 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:08:32.899367 kubelet[2664]: I1123 23:08:32.899300 2664 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 23:08:32.899367 kubelet[2664]: I1123 23:08:32.899307 2664 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:08:32.899831 kubelet[2664]: I1123 23:08:32.899734 2664 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:08:32.901591 kubelet[2664]: I1123 23:08:32.901564 2664 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 23:08:32.903903 kubelet[2664]: I1123 23:08:32.903860 2664 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:08:32.907403 kubelet[2664]: I1123 23:08:32.907378 2664 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:08:32.914184 kubelet[2664]: I1123 23:08:32.914035 2664 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 23:08:32.915744 kubelet[2664]: I1123 23:08:32.915669 2664 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:08:32.916137 kubelet[2664]: I1123 23:08:32.915853 2664 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:08:32.916236 kubelet[2664]: I1123 23:08:32.916142 2664 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:08:32.916236 kubelet[2664]: I1123 23:08:32.916153 2664 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 23:08:32.916236 kubelet[2664]: I1123 23:08:32.916228 2664 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 23:08:32.919907 kubelet[2664]: I1123 23:08:32.919877 2664 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:08:32.920075 kubelet[2664]: I1123 23:08:32.920065 2664 kubelet.go:475] "Attempting to sync node with API server" Nov 23 23:08:32.920108 kubelet[2664]: I1123 23:08:32.920079 2664 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:08:32.920108 kubelet[2664]: I1123 23:08:32.920107 2664 kubelet.go:387] "Adding apiserver pod source" Nov 23 23:08:32.920162 kubelet[2664]: I1123 23:08:32.920122 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:08:32.923281 kubelet[2664]: I1123 23:08:32.923241 2664 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:08:32.923983 kubelet[2664]: I1123 23:08:32.923958 2664 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:08:32.924025 kubelet[2664]: I1123 23:08:32.923996 2664 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 23:08:32.934187 
kubelet[2664]: I1123 23:08:32.933674 2664 server.go:1262] "Started kubelet" Nov 23 23:08:32.934866 kubelet[2664]: I1123 23:08:32.934834 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:08:32.937990 kubelet[2664]: I1123 23:08:32.937951 2664 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:08:32.939617 kubelet[2664]: I1123 23:08:32.939188 2664 server.go:310] "Adding debug handlers to kubelet server" Nov 23 23:08:32.941658 kubelet[2664]: I1123 23:08:32.941423 2664 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:08:32.941658 kubelet[2664]: I1123 23:08:32.941503 2664 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 23:08:32.941786 kubelet[2664]: I1123 23:08:32.941760 2664 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:08:32.942339 kubelet[2664]: I1123 23:08:32.942307 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:08:32.945288 kubelet[2664]: I1123 23:08:32.945260 2664 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 23:08:32.946700 kubelet[2664]: E1123 23:08:32.946388 2664 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:08:32.949189 kubelet[2664]: I1123 23:08:32.947994 2664 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 23:08:32.949553 kubelet[2664]: I1123 23:08:32.949454 2664 reconciler.go:29] "Reconciler: start to sync state" Nov 23 23:08:32.952107 kubelet[2664]: I1123 23:08:32.952075 2664 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:08:32.952107 kubelet[2664]: I1123 23:08:32.952098 2664 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:08:32.953192 kubelet[2664]: I1123 23:08:32.952773 2664 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:08:32.954083 kubelet[2664]: E1123 23:08:32.954058 2664 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:08:32.956646 kubelet[2664]: I1123 23:08:32.956064 2664 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 23 23:08:32.958488 kubelet[2664]: I1123 23:08:32.958443 2664 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:08:32.958488 kubelet[2664]: I1123 23:08:32.958473 2664 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 23:08:32.958488 kubelet[2664]: I1123 23:08:32.958500 2664 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 23:08:32.958619 kubelet[2664]: E1123 23:08:32.958550 2664 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993117 2664 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993140 2664 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993162 2664 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993320 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993329 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993346 2664 policy_none.go:49] "None policy: Start" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993356 2664 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 23:08:32.993414 kubelet[2664]: I1123 23:08:32.993366 2664 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 23:08:32.994797 kubelet[2664]: I1123 23:08:32.994768 2664 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 23 23:08:32.994903 kubelet[2664]: I1123 23:08:32.994893 2664 policy_none.go:47] "Start" Nov 23 23:08:33.000378 kubelet[2664]: E1123 23:08:32.999659 2664 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:08:33.000491 kubelet[2664]: I1123 23:08:33.000444 2664 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:08:33.000514 kubelet[2664]: I1123 23:08:33.000483 2664 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:08:33.000832 kubelet[2664]: I1123 23:08:33.000806 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:08:33.002746 kubelet[2664]: E1123 23:08:33.002427 2664 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:08:33.059514 kubelet[2664]: I1123 23:08:33.059480 2664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.059703 kubelet[2664]: I1123 23:08:33.059574 2664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:33.060021 kubelet[2664]: I1123 23:08:33.060002 2664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:33.101868 kubelet[2664]: I1123 23:08:33.101838 2664 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:08:33.110473 kubelet[2664]: I1123 23:08:33.110438 2664 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 23 23:08:33.110599 kubelet[2664]: I1123 23:08:33.110526 2664 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:08:33.151052 kubelet[2664]: I1123 23:08:33.149852 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/460d03667103e96bfc2770da6959e5bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"460d03667103e96bfc2770da6959e5bc\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:33.151052 kubelet[2664]: I1123 23:08:33.149959 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/460d03667103e96bfc2770da6959e5bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"460d03667103e96bfc2770da6959e5bc\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:33.151052 kubelet[2664]: I1123 23:08:33.150023 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/460d03667103e96bfc2770da6959e5bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"460d03667103e96bfc2770da6959e5bc\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:33.151052 kubelet[2664]: I1123 23:08:33.150056 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.151052 kubelet[2664]: I1123 23:08:33.150076 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.151264 kubelet[2664]: I1123 23:08:33.150128 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.151264 kubelet[2664]: I1123 23:08:33.150160 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.151264 kubelet[2664]: I1123 23:08:33.150216 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.151264 kubelet[2664]: I1123 23:08:33.150236 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:33.339672 sudo[2703]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 23 23:08:33.340016 sudo[2703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 23 23:08:33.673905 sudo[2703]: pam_unix(sudo:session): session closed for user root Nov 23 23:08:33.920980 kubelet[2664]: I1123 23:08:33.920918 2664 apiserver.go:52] "Watching apiserver" Nov 23 23:08:33.950097 kubelet[2664]: I1123 23:08:33.949956 2664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 23:08:33.975198 kubelet[2664]: I1123 23:08:33.975021 2664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:33.975198 kubelet[2664]: I1123 23:08:33.975119 2664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:33.975992 kubelet[2664]: I1123 23:08:33.975463 2664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:33.984868 kubelet[2664]: E1123 23:08:33.984829 2664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 23 23:08:33.985226 kubelet[2664]: E1123 23:08:33.985028 2664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 23 23:08:33.985226 kubelet[2664]: E1123 23:08:33.985075 2664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:08:34.007889 kubelet[2664]: I1123 23:08:34.007810 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.00774919 podStartE2EDuration="1.00774919s" podCreationTimestamp="2025-11-23 23:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:08:33.995686925 +0000 UTC m=+1.142668809" watchObservedRunningTime="2025-11-23 23:08:34.00774919 +0000 UTC m=+1.154731074" Nov 23 23:08:34.017719 kubelet[2664]: I1123 23:08:34.016687 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.01666812 
podStartE2EDuration="1.01666812s" podCreationTimestamp="2025-11-23 23:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:08:34.008023132 +0000 UTC m=+1.155004976" watchObservedRunningTime="2025-11-23 23:08:34.01666812 +0000 UTC m=+1.163650004" Nov 23 23:08:34.017719 kubelet[2664]: I1123 23:08:34.016818 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.016813713 podStartE2EDuration="1.016813713s" podCreationTimestamp="2025-11-23 23:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:08:34.016620509 +0000 UTC m=+1.163602433" watchObservedRunningTime="2025-11-23 23:08:34.016813713 +0000 UTC m=+1.163795597" Nov 23 23:08:35.493346 sudo[1723]: pam_unix(sudo:session): session closed for user root Nov 23 23:08:35.495259 sshd[1722]: Connection closed by 10.0.0.1 port 50642 Nov 23 23:08:35.495676 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Nov 23 23:08:35.500139 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:50642.service: Deactivated successfully. Nov 23 23:08:35.502342 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:08:35.502643 systemd[1]: session-7.scope: Consumed 7.145s CPU time, 259.7M memory peak. Nov 23 23:08:35.503857 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:08:35.505566 systemd-logind[1486]: Removed session 7. Nov 23 23:08:39.291777 kubelet[2664]: I1123 23:08:39.291708 2664 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:08:39.292238 kubelet[2664]: I1123 23:08:39.292220 2664 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:08:39.292271 containerd[1501]: time="2025-11-23T23:08:39.292030811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 23:08:40.081699 systemd[1]: Created slice kubepods-besteffort-pode0854d48_ec88_498c_8016_23747ab2b325.slice - libcontainer container kubepods-besteffort-pode0854d48_ec88_498c_8016_23747ab2b325.slice. 
Nov 23 23:08:40.094493 kubelet[2664]: I1123 23:08:40.094394 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0854d48-ec88-498c-8016-23747ab2b325-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-g2hsp\" (UID: \"e0854d48-ec88-498c-8016-23747ab2b325\") " pod="kube-system/cilium-operator-6f9c7c5859-g2hsp" Nov 23 23:08:40.094493 kubelet[2664]: I1123 23:08:40.094450 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphlm\" (UniqueName: \"kubernetes.io/projected/e0854d48-ec88-498c-8016-23747ab2b325-kube-api-access-xphlm\") pod \"cilium-operator-6f9c7c5859-g2hsp\" (UID: \"e0854d48-ec88-498c-8016-23747ab2b325\") " pod="kube-system/cilium-operator-6f9c7c5859-g2hsp" Nov 23 23:08:40.205641 kubelet[2664]: E1123 23:08:40.205596 2664 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 23 23:08:40.205641 kubelet[2664]: E1123 23:08:40.205633 2664 projected.go:196] Error preparing data for projected volume kube-api-access-xphlm for pod kube-system/cilium-operator-6f9c7c5859-g2hsp: configmap "kube-root-ca.crt" not found Nov 23 23:08:40.208433 kubelet[2664]: E1123 23:08:40.208363 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0854d48-ec88-498c-8016-23747ab2b325-kube-api-access-xphlm podName:e0854d48-ec88-498c-8016-23747ab2b325 nodeName:}" failed. No retries permitted until 2025-11-23 23:08:40.70567238 +0000 UTC m=+7.852654224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xphlm" (UniqueName: "kubernetes.io/projected/e0854d48-ec88-498c-8016-23747ab2b325-kube-api-access-xphlm") pod "cilium-operator-6f9c7c5859-g2hsp" (UID: "e0854d48-ec88-498c-8016-23747ab2b325") : configmap "kube-root-ca.crt" not found Nov 23 23:08:40.421679 systemd[1]: Created slice kubepods-besteffort-pod0fbf1851_b7e5_47f8_8d7b_c34410750c95.slice - libcontainer container kubepods-besteffort-pod0fbf1851_b7e5_47f8_8d7b_c34410750c95.slice. Nov 23 23:08:40.440475 systemd[1]: Created slice kubepods-burstable-pod4291a75a_e0d5_485c_9a73_94161cf73fc1.slice - libcontainer container kubepods-burstable-pod4291a75a_e0d5_485c_9a73_94161cf73fc1.slice. 
Nov 23 23:08:40.498894 kubelet[2664]: I1123 23:08:40.498517 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-bpf-maps\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.498894 kubelet[2664]: I1123 23:08:40.498581 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-config-path\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.498894 kubelet[2664]: I1123 23:08:40.498626 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-kernel\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.498894 kubelet[2664]: I1123 23:08:40.498680 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmx77\" (UniqueName: \"kubernetes.io/projected/0fbf1851-b7e5-47f8-8d7b-c34410750c95-kube-api-access-jmx77\") pod \"kube-proxy-phl4k\" (UID: \"0fbf1851-b7e5-47f8-8d7b-c34410750c95\") " pod="kube-system/kube-proxy-phl4k" Nov 23 23:08:40.498894 kubelet[2664]: I1123 23:08:40.498708 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-etc-cni-netd\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499371 kubelet[2664]: I1123 23:08:40.498738 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-lib-modules\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499371 kubelet[2664]: I1123 23:08:40.498793 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fbf1851-b7e5-47f8-8d7b-c34410750c95-kube-proxy\") pod \"kube-proxy-phl4k\" (UID: \"0fbf1851-b7e5-47f8-8d7b-c34410750c95\") " pod="kube-system/kube-proxy-phl4k" Nov 23 23:08:40.499371 kubelet[2664]: I1123 23:08:40.498856 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-net\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499371 kubelet[2664]: I1123 23:08:40.498894 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-cgroup\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499371 kubelet[2664]: I1123 23:08:40.498915 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d8qdz\" (UniqueName: \"kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-kube-api-access-d8qdz\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499371 kubelet[2664]: I1123 23:08:40.498932 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fbf1851-b7e5-47f8-8d7b-c34410750c95-lib-modules\") pod \"kube-proxy-phl4k\" (UID: \"0fbf1851-b7e5-47f8-8d7b-c34410750c95\") " pod="kube-system/kube-proxy-phl4k" Nov 23 23:08:40.499497 kubelet[2664]: I1123 23:08:40.498952 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cni-path\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499497 kubelet[2664]: I1123 23:08:40.498965 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-hubble-tls\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499497 kubelet[2664]: I1123 23:08:40.498983 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fbf1851-b7e5-47f8-8d7b-c34410750c95-xtables-lock\") pod \"kube-proxy-phl4k\" (UID: \"0fbf1851-b7e5-47f8-8d7b-c34410750c95\") " pod="kube-system/kube-proxy-phl4k" Nov 23 23:08:40.499497 kubelet[2664]: I1123 23:08:40.498996 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-hostproc\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499497 kubelet[2664]: I1123 23:08:40.499010 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-xtables-lock\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499497 kubelet[2664]: I1123 23:08:40.499042 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4291a75a-e0d5-485c-9a73-94161cf73fc1-clustermesh-secrets\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.499607 kubelet[2664]: I1123 23:08:40.499074 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-run\") pod \"cilium-ddz29\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " pod="kube-system/cilium-ddz29" Nov 23 23:08:40.730311 containerd[1501]: time="2025-11-23T23:08:40.729766803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phl4k,Uid:0fbf1851-b7e5-47f8-8d7b-c34410750c95,Namespace:kube-system,Attempt:0,}" Nov 23 23:08:40.744771 containerd[1501]: time="2025-11-23T23:08:40.744729733Z" level=info msg="connecting to shim 
fb4f1f5bb437aaefc0ef2605cb89a7f107125e1ea8bd85466cd9e1f31e7086bf" address="unix:///run/containerd/s/23d4799d72e0412e4ca9e8438304ed0e04add052757dee4e9f87569691f93f52" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:08:40.747681 containerd[1501]: time="2025-11-23T23:08:40.747641858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ddz29,Uid:4291a75a-e0d5-485c-9a73-94161cf73fc1,Namespace:kube-system,Attempt:0,}" Nov 23 23:08:40.765853 containerd[1501]: time="2025-11-23T23:08:40.765485788Z" level=info msg="connecting to shim 309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781" address="unix:///run/containerd/s/1b906b01b7cfe9f058895789ada1464a25ab78987d91c824aa20c83b9a884924" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:08:40.768446 systemd[1]: Started cri-containerd-fb4f1f5bb437aaefc0ef2605cb89a7f107125e1ea8bd85466cd9e1f31e7086bf.scope - libcontainer container fb4f1f5bb437aaefc0ef2605cb89a7f107125e1ea8bd85466cd9e1f31e7086bf. Nov 23 23:08:40.792395 systemd[1]: Started cri-containerd-309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781.scope - libcontainer container 309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781. Nov 23 23:08:40.821832 containerd[1501]: time="2025-11-23T23:08:40.821775520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phl4k,Uid:0fbf1851-b7e5-47f8-8d7b-c34410750c95,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb4f1f5bb437aaefc0ef2605cb89a7f107125e1ea8bd85466cd9e1f31e7086bf\"" Nov 23 23:08:40.826805 containerd[1501]: time="2025-11-23T23:08:40.826770244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ddz29,Uid:4291a75a-e0d5-485c-9a73-94161cf73fc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\"" Nov 23 23:08:40.830479 containerd[1501]: time="2025-11-23T23:08:40.830442486Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 23 23:08:40.830813 containerd[1501]: time="2025-11-23T23:08:40.830781058Z" level=info msg="CreateContainer within sandbox \"fb4f1f5bb437aaefc0ef2605cb89a7f107125e1ea8bd85466cd9e1f31e7086bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:08:40.839713 containerd[1501]: time="2025-11-23T23:08:40.839664097Z" level=info msg="Container 1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:08:40.846062 containerd[1501]: time="2025-11-23T23:08:40.846018109Z" level=info msg="CreateContainer within sandbox \"fb4f1f5bb437aaefc0ef2605cb89a7f107125e1ea8bd85466cd9e1f31e7086bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf\"" Nov 23 23:08:40.846870 containerd[1501]: time="2025-11-23T23:08:40.846779746Z" level=info msg="StartContainer for \"1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf\"" Nov 23 23:08:40.848435 containerd[1501]: time="2025-11-23T23:08:40.848406874Z" level=info msg="connecting to shim 1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf" address="unix:///run/containerd/s/23d4799d72e0412e4ca9e8438304ed0e04add052757dee4e9f87569691f93f52" protocol=ttrpc version=3 Nov 23 23:08:40.873387 systemd[1]: Started cri-containerd-1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf.scope - libcontainer container 
1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf. Nov 23 23:08:40.958568 containerd[1501]: time="2025-11-23T23:08:40.958506679Z" level=info msg="StartContainer for \"1815bd66d9eac872931238170bf602a9a787e77015584b8fbd1978c0d264c5bf\" returns successfully" Nov 23 23:08:40.997011 containerd[1501]: time="2025-11-23T23:08:40.995944727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-g2hsp,Uid:e0854d48-ec88-498c-8016-23747ab2b325,Namespace:kube-system,Attempt:0,}" Nov 23 23:08:41.023216 containerd[1501]: time="2025-11-23T23:08:41.022334482Z" level=info msg="connecting to shim f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b" address="unix:///run/containerd/s/e790a3f31ca78ed8d1b0200cebc1420d3577d3310620106053a073b11cde2b47" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:08:41.046408 systemd[1]: Started cri-containerd-f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b.scope - libcontainer container f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b. Nov 23 23:08:41.094735 containerd[1501]: time="2025-11-23T23:08:41.094688820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-g2hsp,Uid:e0854d48-ec88-498c-8016-23747ab2b325,Namespace:kube-system,Attempt:0,} returns sandbox id \"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\"" Nov 23 23:08:43.274204 kubelet[2664]: I1123 23:08:43.274110 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-phl4k" podStartSLOduration=3.274093913 podStartE2EDuration="3.274093913s" podCreationTimestamp="2025-11-23 23:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:08:41.008760855 +0000 UTC m=+8.155742739" watchObservedRunningTime="2025-11-23 23:08:43.274093913 +0000 UTC m=+10.421075797" Nov 23 23:08:47.425079 update_engine[1488]: I20251123 23:08:47.424937 1488 update_attempter.cc:509] Updating boot flags... Nov 23 23:08:49.904226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094207134.mount: Deactivated successfully. Nov 23 23:09:01.521237 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:51348.service - OpenSSH per-connection server daemon (10.0.0.1:51348). 
Nov 23 23:09:01.658979 containerd[1501]: time="2025-11-23T23:09:01.658913148Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:09:01.660200 containerd[1501]: time="2025-11-23T23:09:01.659917588Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Nov 23 23:09:01.663175 containerd[1501]: time="2025-11-23T23:09:01.662703018Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:09:01.663930 containerd[1501]: time="2025-11-23T23:09:01.663874184Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 20.833386652s" Nov 23 23:09:01.663930 containerd[1501]: time="2025-11-23T23:09:01.663928466Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 23 23:09:01.666537 containerd[1501]: time="2025-11-23T23:09:01.666492287Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 23 23:09:01.674644 containerd[1501]: time="2025-11-23T23:09:01.674597607Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 23 23:09:01.693521 containerd[1501]: time="2025-11-23T23:09:01.693463671Z" level=info msg="Container 4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:01.706036 containerd[1501]: time="2025-11-23T23:09:01.705962885Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\"" Nov 23 23:09:01.706865 containerd[1501]: time="2025-11-23T23:09:01.706833599Z" level=info msg="StartContainer for \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\"" Nov 23 23:09:01.708982 containerd[1501]: time="2025-11-23T23:09:01.708941162Z" level=info msg="connecting to shim 4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1" address="unix:///run/containerd/s/1b906b01b7cfe9f058895789ada1464a25ab78987d91c824aa20c83b9a884924" protocol=ttrpc version=3 Nov 23 23:09:01.710213 sshd[3095]: Accepted publickey for core from 10.0.0.1 port 51348 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:01.711526 sshd-session[3095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:01.724337 systemd-logind[1486]: New session 8 of user core. Nov 23 23:09:01.732072 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 23 23:09:01.756410 systemd[1]: Started cri-containerd-4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1.scope - libcontainer container 4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1. Nov 23 23:09:01.792595 containerd[1501]: time="2025-11-23T23:09:01.792472458Z" level=info msg="StartContainer for \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\" returns successfully" Nov 23 23:09:01.812092 systemd[1]: cri-containerd-4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1.scope: Deactivated successfully. Nov 23 23:09:01.863567 containerd[1501]: time="2025-11-23T23:09:01.863505461Z" level=info msg="received container exit event container_id:\"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\" id:\"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\" pid:3117 exited_at:{seconds:1763939341 nanos:852022608}" Nov 23 23:09:01.885041 sshd[3103]: Connection closed by 10.0.0.1 port 51348 Nov 23 23:09:01.885707 sshd-session[3095]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:01.889640 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:09:01.889765 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:51348.service: Deactivated successfully. Nov 23 23:09:01.891664 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:09:01.894658 systemd-logind[1486]: Removed session 8. Nov 23 23:09:01.927279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1-rootfs.mount: Deactivated successfully. Nov 23 23:09:02.050086 containerd[1501]: time="2025-11-23T23:09:02.049942255Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 23 23:09:02.081232 containerd[1501]: time="2025-11-23T23:09:02.080780675Z" level=info msg="Container 5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:02.087262 containerd[1501]: time="2025-11-23T23:09:02.087143911Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\"" Nov 23 23:09:02.087827 containerd[1501]: time="2025-11-23T23:09:02.087775174Z" level=info msg="StartContainer for \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\"" Nov 23 23:09:02.090492 containerd[1501]: time="2025-11-23T23:09:02.090447193Z" level=info msg="connecting to shim 5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7" address="unix:///run/containerd/s/1b906b01b7cfe9f058895789ada1464a25ab78987d91c824aa20c83b9a884924" protocol=ttrpc version=3 Nov 23 23:09:02.113428 systemd[1]: Started cri-containerd-5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7.scope - libcontainer container 5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7. Nov 23 23:09:02.143282 containerd[1501]: time="2025-11-23T23:09:02.143230345Z" level=info msg="StartContainer for \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\" returns successfully" Nov 23 23:09:02.157310 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 23:09:02.157527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 23 23:09:02.157993 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:09:02.159444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:09:02.160670 systemd[1]: cri-containerd-5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7.scope: Deactivated successfully. Nov 23 23:09:02.161813 containerd[1501]: time="2025-11-23T23:09:02.161766551Z" level=info msg="received container exit event container_id:\"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\" id:\"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\" pid:3175 exited_at:{seconds:1763939342 nanos:160808836}" Nov 23 23:09:02.184558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:09:02.904268 containerd[1501]: time="2025-11-23T23:09:02.904198174Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:09:02.905257 containerd[1501]: time="2025-11-23T23:09:02.904949002Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Nov 23 23:09:02.905851 containerd[1501]: time="2025-11-23T23:09:02.905794353Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:09:02.907352 containerd[1501]: time="2025-11-23T23:09:02.907303129Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.239830843s" Nov 23 23:09:02.907352 containerd[1501]: time="2025-11-23T23:09:02.907348010Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 23 23:09:02.912588 containerd[1501]: time="2025-11-23T23:09:02.912477440Z" level=info msg="CreateContainer within sandbox \"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 23 23:09:02.924360 containerd[1501]: time="2025-11-23T23:09:02.923256879Z" level=info msg="Container 92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:02.932955 containerd[1501]: time="2025-11-23T23:09:02.932865154Z" level=info msg="CreateContainer within sandbox \"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\"" Nov 23 23:09:02.933571 containerd[1501]: time="2025-11-23T23:09:02.933508858Z" level=info msg="StartContainer for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\"" Nov 23 23:09:02.935367 containerd[1501]: time="2025-11-23T23:09:02.935293084Z" level=info msg="connecting to shim 
92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751" address="unix:///run/containerd/s/e790a3f31ca78ed8d1b0200cebc1420d3577d3310620106053a073b11cde2b47" protocol=ttrpc version=3 Nov 23 23:09:02.979478 systemd[1]: Started cri-containerd-92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751.scope - libcontainer container 92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751. Nov 23 23:09:03.017138 containerd[1501]: time="2025-11-23T23:09:03.017087231Z" level=info msg="StartContainer for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" returns successfully" Nov 23 23:09:03.117271 containerd[1501]: time="2025-11-23T23:09:03.117136180Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 23 23:09:03.189196 containerd[1501]: time="2025-11-23T23:09:03.186022929Z" level=info msg="Container fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:03.192558 kubelet[2664]: I1123 23:09:03.192448 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-g2hsp" podStartSLOduration=1.380181565 podStartE2EDuration="23.192423591s" podCreationTimestamp="2025-11-23 23:08:40 +0000 UTC" firstStartedPulling="2025-11-23 23:08:41.096092821 +0000 UTC m=+8.243074705" lastFinishedPulling="2025-11-23 23:09:02.908334887 +0000 UTC m=+30.055316731" observedRunningTime="2025-11-23 23:09:03.186002288 +0000 UTC m=+30.332984252" watchObservedRunningTime="2025-11-23 23:09:03.192423591 +0000 UTC m=+30.339405515" Nov 23 23:09:03.207741 containerd[1501]: time="2025-11-23T23:09:03.206746568Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\"" Nov 23 23:09:03.210717 containerd[1501]: time="2025-11-23T23:09:03.210600861Z" level=info msg="StartContainer for \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\"" Nov 23 23:09:03.213439 containerd[1501]: time="2025-11-23T23:09:03.213381318Z" level=info msg="connecting to shim fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292" address="unix:///run/containerd/s/1b906b01b7cfe9f058895789ada1464a25ab78987d91c824aa20c83b9a884924" protocol=ttrpc version=3 Nov 23 23:09:03.259499 systemd[1]: Started cri-containerd-fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292.scope - libcontainer container fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292. Nov 23 23:09:03.337937 containerd[1501]: time="2025-11-23T23:09:03.337890556Z" level=info msg="StartContainer for \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\" returns successfully" Nov 23 23:09:03.340316 systemd[1]: cri-containerd-fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292.scope: Deactivated successfully. Nov 23 23:09:03.340898 systemd[1]: cri-containerd-fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292.scope: Consumed 36ms CPU time, 8.3M memory peak, 6.1M read from disk. 
Nov 23 23:09:03.342333 containerd[1501]: time="2025-11-23T23:09:03.342274028Z" level=info msg="received container exit event container_id:\"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\" id:\"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\" pid:3275 exited_at:{seconds:1763939343 nanos:341364036}" Nov 23 23:09:03.690909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997225695.mount: Deactivated successfully. Nov 23 23:09:04.066567 containerd[1501]: time="2025-11-23T23:09:04.066425878Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 23 23:09:04.076091 containerd[1501]: time="2025-11-23T23:09:04.076039751Z" level=info msg="Container f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:04.080339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160661766.mount: Deactivated successfully. Nov 23 23:09:04.084542 containerd[1501]: time="2025-11-23T23:09:04.084497266Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\"" Nov 23 23:09:04.087141 containerd[1501]: time="2025-11-23T23:09:04.087100871Z" level=info msg="StartContainer for \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\"" Nov 23 23:09:04.094034 containerd[1501]: time="2025-11-23T23:09:04.093981654Z" level=info msg="connecting to shim f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c" address="unix:///run/containerd/s/1b906b01b7cfe9f058895789ada1464a25ab78987d91c824aa20c83b9a884924" protocol=ttrpc version=3 Nov 23 23:09:04.113394 systemd[1]: Started cri-containerd-f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c.scope - libcontainer container f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c. Nov 23 23:09:04.139745 systemd[1]: cri-containerd-f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c.scope: Deactivated successfully. Nov 23 23:09:04.147534 containerd[1501]: time="2025-11-23T23:09:04.147477354Z" level=info msg="received container exit event container_id:\"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\" id:\"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\" pid:3315 exited_at:{seconds:1763939344 nanos:140701293}" Nov 23 23:09:04.156376 containerd[1501]: time="2025-11-23T23:09:04.156333362Z" level=info msg="StartContainer for \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\" returns successfully" Nov 23 23:09:04.163357 containerd[1501]: time="2025-11-23T23:09:04.143890717Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4291a75a_e0d5_485c_9a73_94161cf73fc1.slice/cri-containerd-f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c.scope/memory.events\": no such file or directory" Nov 23 23:09:04.183289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c-rootfs.mount: Deactivated successfully. 
Nov 23 23:09:05.070435 containerd[1501]: time="2025-11-23T23:09:05.070377896Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 23 23:09:05.090204 containerd[1501]: time="2025-11-23T23:09:05.089142788Z" level=info msg="Container 6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:05.092699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349769520.mount: Deactivated successfully. Nov 23 23:09:05.099708 containerd[1501]: time="2025-11-23T23:09:05.099647348Z" level=info msg="CreateContainer within sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\"" Nov 23 23:09:05.100336 containerd[1501]: time="2025-11-23T23:09:05.100268087Z" level=info msg="StartContainer for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\"" Nov 23 23:09:05.101799 containerd[1501]: time="2025-11-23T23:09:05.101748252Z" level=info msg="connecting to shim 6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c" address="unix:///run/containerd/s/1b906b01b7cfe9f058895789ada1464a25ab78987d91c824aa20c83b9a884924" protocol=ttrpc version=3 Nov 23 23:09:05.125377 systemd[1]: Started cri-containerd-6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c.scope - libcontainer container 6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c. Nov 23 23:09:05.171120 containerd[1501]: time="2025-11-23T23:09:05.171081045Z" level=info msg="StartContainer for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" returns successfully" Nov 23 23:09:05.291561 kubelet[2664]: I1123 23:09:05.291511 2664 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 23 23:09:05.341723 kubelet[2664]: E1123 23:09:05.339705 2664 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Nov 23 23:09:05.348042 kubelet[2664]: E1123 23:09:05.346345 2664 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-zlqrd\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="acaf478e-5b19-411f-bc8d-03ea82612035" pod="kube-system/coredns-66bc5c9577-zlqrd" Nov 23 23:09:05.361457 systemd[1]: Created slice kubepods-burstable-podacaf478e_5b19_411f_bc8d_03ea82612035.slice - libcontainer container kubepods-burstable-podacaf478e_5b19_411f_bc8d_03ea82612035.slice. 
Nov 23 23:09:05.369705 kubelet[2664]: I1123 23:09:05.369664 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97bea278-8563-4edd-a70d-e1f1d4a862df-config-volume\") pod \"coredns-66bc5c9577-s8psz\" (UID: \"97bea278-8563-4edd-a70d-e1f1d4a862df\") " pod="kube-system/coredns-66bc5c9577-s8psz" Nov 23 23:09:05.369705 kubelet[2664]: I1123 23:09:05.369706 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqlv\" (UniqueName: \"kubernetes.io/projected/97bea278-8563-4edd-a70d-e1f1d4a862df-kube-api-access-ftqlv\") pod \"coredns-66bc5c9577-s8psz\" (UID: \"97bea278-8563-4edd-a70d-e1f1d4a862df\") " pod="kube-system/coredns-66bc5c9577-s8psz" Nov 23 23:09:05.369874 kubelet[2664]: I1123 23:09:05.369727 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2chx\" (UniqueName: \"kubernetes.io/projected/acaf478e-5b19-411f-bc8d-03ea82612035-kube-api-access-t2chx\") pod \"coredns-66bc5c9577-zlqrd\" (UID: \"acaf478e-5b19-411f-bc8d-03ea82612035\") " pod="kube-system/coredns-66bc5c9577-zlqrd" Nov 23 23:09:05.369874 kubelet[2664]: I1123 23:09:05.369744 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acaf478e-5b19-411f-bc8d-03ea82612035-config-volume\") pod \"coredns-66bc5c9577-zlqrd\" (UID: \"acaf478e-5b19-411f-bc8d-03ea82612035\") " pod="kube-system/coredns-66bc5c9577-zlqrd" Nov 23 23:09:05.370653 systemd[1]: Created slice kubepods-burstable-pod97bea278_8563_4edd_a70d_e1f1d4a862df.slice - libcontainer container kubepods-burstable-pod97bea278_8563_4edd_a70d_e1f1d4a862df.slice. Nov 23 23:09:06.094356 kubelet[2664]: I1123 23:09:06.094245 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ddz29" podStartSLOduration=5.256459974 podStartE2EDuration="26.094225205s" podCreationTimestamp="2025-11-23 23:08:40 +0000 UTC" firstStartedPulling="2025-11-23 23:08:40.828314 +0000 UTC m=+7.975295884" lastFinishedPulling="2025-11-23 23:09:01.666079231 +0000 UTC m=+28.813061115" observedRunningTime="2025-11-23 23:09:06.093955477 +0000 UTC m=+33.240937401" watchObservedRunningTime="2025-11-23 23:09:06.094225205 +0000 UTC m=+33.241207089" Nov 23 23:09:06.470776 kubelet[2664]: E1123 23:09:06.470735 2664 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:09:06.471152 kubelet[2664]: E1123 23:09:06.470805 2664 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:09:06.471152 kubelet[2664]: E1123 23:09:06.471124 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/acaf478e-5b19-411f-bc8d-03ea82612035-config-volume podName:acaf478e-5b19-411f-bc8d-03ea82612035 nodeName:}" failed. No retries permitted until 2025-11-23 23:09:06.970841525 +0000 UTC m=+34.117823409 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/acaf478e-5b19-411f-bc8d-03ea82612035-config-volume") pod "coredns-66bc5c9577-zlqrd" (UID: "acaf478e-5b19-411f-bc8d-03ea82612035") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:09:06.471152 kubelet[2664]: E1123 23:09:06.471146 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97bea278-8563-4edd-a70d-e1f1d4a862df-config-volume podName:97bea278-8563-4edd-a70d-e1f1d4a862df nodeName:}" failed. No retries permitted until 2025-11-23 23:09:06.971138734 +0000 UTC m=+34.118120618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97bea278-8563-4edd-a70d-e1f1d4a862df-config-volume") pod "coredns-66bc5c9577-s8psz" (UID: "97bea278-8563-4edd-a70d-e1f1d4a862df") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:09:06.902104 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:51350.service - OpenSSH per-connection server daemon (10.0.0.1:51350). Nov 23 23:09:06.970356 sshd[3457]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:06.971795 sshd-session[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:06.977016 systemd-logind[1486]: New session 9 of user core. Nov 23 23:09:06.994627 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 23:09:07.131044 sshd[3460]: Connection closed by 10.0.0.1 port 51350 Nov 23 23:09:07.131481 sshd-session[3457]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:07.136110 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:51350.service: Deactivated successfully. Nov 23 23:09:07.139187 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:09:07.140828 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:09:07.142691 systemd-logind[1486]: Removed session 9. 
Nov 23 23:09:07.171266 containerd[1501]: time="2025-11-23T23:09:07.171210660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlqrd,Uid:acaf478e-5b19-411f-bc8d-03ea82612035,Namespace:kube-system,Attempt:0,}" Nov 23 23:09:07.178847 containerd[1501]: time="2025-11-23T23:09:07.178630890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s8psz,Uid:97bea278-8563-4edd-a70d-e1f1d4a862df,Namespace:kube-system,Attempt:0,}" Nov 23 23:09:07.332560 systemd-networkd[1438]: cilium_host: Link UP Nov 23 23:09:07.333044 systemd-networkd[1438]: cilium_net: Link UP Nov 23 23:09:07.333255 systemd-networkd[1438]: cilium_host: Gained carrier Nov 23 23:09:07.333397 systemd-networkd[1438]: cilium_net: Gained carrier Nov 23 23:09:07.419493 systemd-networkd[1438]: cilium_vxlan: Link UP Nov 23 23:09:07.419499 systemd-networkd[1438]: cilium_vxlan: Gained carrier Nov 23 23:09:07.578335 systemd-networkd[1438]: cilium_net: Gained IPv6LL Nov 23 23:09:07.711071 kernel: NET: Registered PF_ALG protocol family Nov 23 23:09:07.762405 systemd-networkd[1438]: cilium_host: Gained IPv6LL Nov 23 23:09:08.338502 systemd-networkd[1438]: lxc_health: Link UP Nov 23 23:09:08.348992 systemd-networkd[1438]: lxc_health: Gained carrier Nov 23 23:09:08.650507 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL Nov 23 23:09:08.727363 systemd-networkd[1438]: lxcc02a95fa0599: Link UP Nov 23 23:09:08.730231 kernel: eth0: renamed from tmp393b9 Nov 23 23:09:08.731992 systemd-networkd[1438]: lxcc02a95fa0599: Gained carrier Nov 23 23:09:08.733430 systemd-networkd[1438]: lxcbf42f34fa6a0: Link UP Nov 23 23:09:08.745218 kernel: eth0: renamed from tmpa80f0 Nov 23 23:09:08.748661 systemd-networkd[1438]: lxcbf42f34fa6a0: Gained carrier Nov 23 23:09:09.546399 systemd-networkd[1438]: lxc_health: Gained IPv6LL Nov 23 23:09:09.995322 systemd-networkd[1438]: lxcbf42f34fa6a0: Gained IPv6LL Nov 23 23:09:10.570331 systemd-networkd[1438]: lxcc02a95fa0599: Gained IPv6LL Nov 23 23:09:12.146484 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:43792.service - OpenSSH per-connection server daemon (10.0.0.1:43792). Nov 23 23:09:12.208724 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 43792 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:12.210893 sshd-session[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:12.220434 systemd-logind[1486]: New session 10 of user core. Nov 23 23:09:12.229383 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:09:12.352904 sshd[3891]: Connection closed by 10.0.0.1 port 43792 Nov 23 23:09:12.353497 sshd-session[3888]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:12.357238 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:43792.service: Deactivated successfully. Nov 23 23:09:12.359016 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:09:12.359809 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:09:12.363226 systemd-logind[1486]: Removed session 10. 
Nov 23 23:09:12.549633 containerd[1501]: time="2025-11-23T23:09:12.549578210Z" level=info msg="connecting to shim a80f0d64a4ff72a3769ea83dbf60f953d1b0737f1af58ba417946103b73330dd" address="unix:///run/containerd/s/b88abcf20436af497ce74e2aaf4f285c53ea4eb35759c9461f7ed811631bdc53" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:09:12.550733 containerd[1501]: time="2025-11-23T23:09:12.550700475Z" level=info msg="connecting to shim 393b9250722f2aef6afbb4e8b5f67c1509de1e30ea934030d4747235ef5f406d" address="unix:///run/containerd/s/d17ca81935cd3cde74581d870805484be3c7372eae023c92fa3641be124822f6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:09:12.572381 systemd[1]: Started cri-containerd-a80f0d64a4ff72a3769ea83dbf60f953d1b0737f1af58ba417946103b73330dd.scope - libcontainer container a80f0d64a4ff72a3769ea83dbf60f953d1b0737f1af58ba417946103b73330dd. Nov 23 23:09:12.592379 systemd[1]: Started cri-containerd-393b9250722f2aef6afbb4e8b5f67c1509de1e30ea934030d4747235ef5f406d.scope - libcontainer container 393b9250722f2aef6afbb4e8b5f67c1509de1e30ea934030d4747235ef5f406d. Nov 23 23:09:12.598943 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:09:12.611473 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:09:12.637366 containerd[1501]: time="2025-11-23T23:09:12.637290843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s8psz,Uid:97bea278-8563-4edd-a70d-e1f1d4a862df,Namespace:kube-system,Attempt:0,} returns sandbox id \"a80f0d64a4ff72a3769ea83dbf60f953d1b0737f1af58ba417946103b73330dd\"" Nov 23 23:09:12.642672 containerd[1501]: time="2025-11-23T23:09:12.642618361Z" level=info msg="CreateContainer within sandbox \"a80f0d64a4ff72a3769ea83dbf60f953d1b0737f1af58ba417946103b73330dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:09:12.643740 containerd[1501]: time="2025-11-23T23:09:12.643701225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlqrd,Uid:acaf478e-5b19-411f-bc8d-03ea82612035,Namespace:kube-system,Attempt:0,} returns sandbox id \"393b9250722f2aef6afbb4e8b5f67c1509de1e30ea934030d4747235ef5f406d\"" Nov 23 23:09:12.650465 containerd[1501]: time="2025-11-23T23:09:12.649741800Z" level=info msg="CreateContainer within sandbox \"393b9250722f2aef6afbb4e8b5f67c1509de1e30ea934030d4747235ef5f406d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:09:12.655509 containerd[1501]: time="2025-11-23T23:09:12.655459927Z" level=info msg="Container 8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:12.663972 containerd[1501]: time="2025-11-23T23:09:12.663926235Z" level=info msg="CreateContainer within sandbox \"a80f0d64a4ff72a3769ea83dbf60f953d1b0737f1af58ba417946103b73330dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2\"" Nov 23 23:09:12.664777 containerd[1501]: time="2025-11-23T23:09:12.664729933Z" level=info msg="StartContainer for \"8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2\"" Nov 23 23:09:12.666289 containerd[1501]: time="2025-11-23T23:09:12.666127524Z" level=info msg="connecting to shim 8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2" address="unix:///run/containerd/s/b88abcf20436af497ce74e2aaf4f285c53ea4eb35759c9461f7ed811631bdc53" 
protocol=ttrpc version=3 Nov 23 23:09:12.667597 containerd[1501]: time="2025-11-23T23:09:12.667566597Z" level=info msg="Container 6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:12.674254 containerd[1501]: time="2025-11-23T23:09:12.674206904Z" level=info msg="CreateContainer within sandbox \"393b9250722f2aef6afbb4e8b5f67c1509de1e30ea934030d4747235ef5f406d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e\"" Nov 23 23:09:12.675015 containerd[1501]: time="2025-11-23T23:09:12.674987562Z" level=info msg="StartContainer for \"6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e\"" Nov 23 23:09:12.676775 containerd[1501]: time="2025-11-23T23:09:12.676737641Z" level=info msg="connecting to shim 6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e" address="unix:///run/containerd/s/d17ca81935cd3cde74581d870805484be3c7372eae023c92fa3641be124822f6" protocol=ttrpc version=3 Nov 23 23:09:12.688360 systemd[1]: Started cri-containerd-8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2.scope - libcontainer container 8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2. Nov 23 23:09:12.691699 systemd[1]: Started cri-containerd-6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e.scope - libcontainer container 6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e. Nov 23 23:09:12.748343 containerd[1501]: time="2025-11-23T23:09:12.748282393Z" level=info msg="StartContainer for \"8b0a85754d9dbe8768c036bbd599c73642bd69f688cc63e98b93556fbdec5bd2\" returns successfully" Nov 23 23:09:12.749336 containerd[1501]: time="2025-11-23T23:09:12.749247415Z" level=info msg="StartContainer for \"6921ade12f69d5ee179f647a1d9fc536cc7fc7f36e988730ca0de676131dd92e\" returns successfully" Nov 23 23:09:13.100906 kubelet[2664]: I1123 23:09:13.100845 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s8psz" podStartSLOduration=33.10082842 podStartE2EDuration="33.10082842s" podCreationTimestamp="2025-11-23 23:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:09:13.099654874 +0000 UTC m=+40.246636758" watchObservedRunningTime="2025-11-23 23:09:13.10082842 +0000 UTC m=+40.247810304" Nov 23 23:09:17.368666 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:43808.service - OpenSSH per-connection server daemon (10.0.0.1:43808). Nov 23 23:09:17.422374 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 43808 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:17.423971 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:17.428106 systemd-logind[1486]: New session 11 of user core. Nov 23 23:09:17.441386 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:09:17.563218 sshd[4080]: Connection closed by 10.0.0.1 port 43808 Nov 23 23:09:17.565379 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:17.585048 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:43808.service: Deactivated successfully. Nov 23 23:09:17.586999 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:09:17.587851 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. 
Nov 23 23:09:17.590942 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:43822.service - OpenSSH per-connection server daemon (10.0.0.1:43822). Nov 23 23:09:17.591924 systemd-logind[1486]: Removed session 11. Nov 23 23:09:17.654325 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 43822 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:17.655695 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:17.660499 systemd-logind[1486]: New session 12 of user core. Nov 23 23:09:17.676393 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:09:17.860376 sshd[4097]: Connection closed by 10.0.0.1 port 43822 Nov 23 23:09:17.860785 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:17.873052 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:43822.service: Deactivated successfully. Nov 23 23:09:17.877275 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:09:17.879502 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:09:17.882774 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:43838.service - OpenSSH per-connection server daemon (10.0.0.1:43838). Nov 23 23:09:17.884275 systemd-logind[1486]: Removed session 12. Nov 23 23:09:17.944424 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 43838 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:17.945889 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:17.952575 systemd-logind[1486]: New session 13 of user core. Nov 23 23:09:17.966411 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:09:18.081520 sshd[4112]: Connection closed by 10.0.0.1 port 43838 Nov 23 23:09:18.082070 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:18.086270 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:09:18.086577 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:43838.service: Deactivated successfully. Nov 23 23:09:18.089883 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:09:18.092021 systemd-logind[1486]: Removed session 13. Nov 23 23:09:23.098180 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:38962.service - OpenSSH per-connection server daemon (10.0.0.1:38962). Nov 23 23:09:23.127856 kubelet[2664]: I1123 23:09:23.127781 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zlqrd" podStartSLOduration=43.127765467 podStartE2EDuration="43.127765467s" podCreationTimestamp="2025-11-23 23:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:09:13.114202109 +0000 UTC m=+40.261184033" watchObservedRunningTime="2025-11-23 23:09:23.127765467 +0000 UTC m=+50.274747311" Nov 23 23:09:23.191921 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 38962 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:23.194336 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:23.206838 systemd-logind[1486]: New session 14 of user core. Nov 23 23:09:23.222595 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 23 23:09:23.366910 sshd[4130]: Connection closed by 10.0.0.1 port 38962 Nov 23 23:09:23.367510 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:23.376030 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:38962.service: Deactivated successfully. Nov 23 23:09:23.378064 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:09:23.379221 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:09:23.381460 systemd-logind[1486]: Removed session 14. Nov 23 23:09:28.381683 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:38970.service - OpenSSH per-connection server daemon (10.0.0.1:38970). Nov 23 23:09:28.457317 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 38970 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:28.458979 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:28.465757 systemd-logind[1486]: New session 15 of user core. Nov 23 23:09:28.473467 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:09:28.610757 sshd[4151]: Connection closed by 10.0.0.1 port 38970 Nov 23 23:09:28.611355 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:28.624560 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:38970.service: Deactivated successfully. Nov 23 23:09:28.626382 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:09:28.628339 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:09:28.629587 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:38984.service - OpenSSH per-connection server daemon (10.0.0.1:38984). Nov 23 23:09:28.630660 systemd-logind[1486]: Removed session 15. Nov 23 23:09:28.692187 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 38984 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:28.693779 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:28.702193 systemd-logind[1486]: New session 16 of user core. Nov 23 23:09:28.713426 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:09:28.915089 sshd[4168]: Connection closed by 10.0.0.1 port 38984 Nov 23 23:09:28.915681 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:28.932587 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:38984.service: Deactivated successfully. Nov 23 23:09:28.934562 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:09:28.935554 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:09:28.938453 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:38992.service - OpenSSH per-connection server daemon (10.0.0.1:38992). Nov 23 23:09:28.939452 systemd-logind[1486]: Removed session 16. Nov 23 23:09:29.035478 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 38992 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:29.037077 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:29.042339 systemd-logind[1486]: New session 17 of user core. Nov 23 23:09:29.054444 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 23 23:09:29.723703 sshd[4182]: Connection closed by 10.0.0.1 port 38992 Nov 23 23:09:29.724108 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:29.734442 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:38992.service: Deactivated successfully. Nov 23 23:09:29.737621 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 23:09:29.740320 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Nov 23 23:09:29.744870 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:58828.service - OpenSSH per-connection server daemon (10.0.0.1:58828). Nov 23 23:09:29.746413 systemd-logind[1486]: Removed session 17. Nov 23 23:09:29.806896 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 58828 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:29.808379 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:29.817288 systemd-logind[1486]: New session 18 of user core. Nov 23 23:09:29.823445 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 23:09:30.074679 sshd[4206]: Connection closed by 10.0.0.1 port 58828 Nov 23 23:09:30.078059 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:30.084676 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:58828.service: Deactivated successfully. Nov 23 23:09:30.088868 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 23:09:30.091496 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Nov 23 23:09:30.095410 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:58838.service - OpenSSH per-connection server daemon (10.0.0.1:58838). Nov 23 23:09:30.097453 systemd-logind[1486]: Removed session 18. Nov 23 23:09:30.152593 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 58838 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:30.154114 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:30.159259 systemd-logind[1486]: New session 19 of user core. Nov 23 23:09:30.173449 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 23 23:09:30.288686 sshd[4220]: Connection closed by 10.0.0.1 port 58838 Nov 23 23:09:30.289010 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:30.292874 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:58838.service: Deactivated successfully. Nov 23 23:09:30.295333 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 23:09:30.296411 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Nov 23 23:09:30.298443 systemd-logind[1486]: Removed session 19. Nov 23 23:09:35.313693 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:58848.service - OpenSSH per-connection server daemon (10.0.0.1:58848). Nov 23 23:09:35.388073 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 58848 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:35.389531 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:35.395829 systemd-logind[1486]: New session 20 of user core. Nov 23 23:09:35.411416 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 23 23:09:35.556502 sshd[4242]: Connection closed by 10.0.0.1 port 58848 Nov 23 23:09:35.557135 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:35.561160 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:58848.service: Deactivated successfully. Nov 23 23:09:35.563023 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 23:09:35.565461 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Nov 23 23:09:35.566863 systemd-logind[1486]: Removed session 20. Nov 23 23:09:40.569202 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Nov 23 23:09:40.640615 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:40.642362 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:40.647401 systemd-logind[1486]: New session 21 of user core. Nov 23 23:09:40.655617 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 23 23:09:40.783559 sshd[4258]: Connection closed by 10.0.0.1 port 53120 Nov 23 23:09:40.783887 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:40.787515 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:53120.service: Deactivated successfully. Nov 23 23:09:40.789314 systemd[1]: session-21.scope: Deactivated successfully. Nov 23 23:09:40.790584 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Nov 23 23:09:40.791564 systemd-logind[1486]: Removed session 21. Nov 23 23:09:45.798904 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:53128.service - OpenSSH per-connection server daemon (10.0.0.1:53128). Nov 23 23:09:45.880813 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 53128 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:45.882311 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:45.887604 systemd-logind[1486]: New session 22 of user core. Nov 23 23:09:45.898388 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 23 23:09:46.023206 sshd[4276]: Connection closed by 10.0.0.1 port 53128 Nov 23 23:09:46.023034 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:46.032559 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:53128.service: Deactivated successfully. Nov 23 23:09:46.035598 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 23:09:46.036923 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Nov 23 23:09:46.040694 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:53132.service - OpenSSH per-connection server daemon (10.0.0.1:53132). Nov 23 23:09:46.042504 systemd-logind[1486]: Removed session 22. Nov 23 23:09:46.112088 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 53132 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:46.113585 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:46.118059 systemd-logind[1486]: New session 23 of user core. Nov 23 23:09:46.129443 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 23 23:09:48.269464 containerd[1501]: time="2025-11-23T23:09:48.264730679Z" level=info msg="StopContainer for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" with timeout 30 (s)" Nov 23 23:09:48.274752 containerd[1501]: time="2025-11-23T23:09:48.274712409Z" level=info msg="Stop container \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" with signal terminated" Nov 23 23:09:48.294528 systemd[1]: cri-containerd-92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751.scope: Deactivated successfully. Nov 23 23:09:48.297370 containerd[1501]: time="2025-11-23T23:09:48.297322733Z" level=info msg="received container exit event container_id:\"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" id:\"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" pid:3238 exited_at:{seconds:1763939388 nanos:296844169}" Nov 23 23:09:48.298043 containerd[1501]: time="2025-11-23T23:09:48.297986379Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:09:48.307980 containerd[1501]: time="2025-11-23T23:09:48.307938069Z" level=info msg="StopContainer for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" with timeout 2 (s)" Nov 23 23:09:48.308780 containerd[1501]: time="2025-11-23T23:09:48.308750477Z" level=info msg="Stop container \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" with signal terminated" Nov 23 23:09:48.317895 systemd-networkd[1438]: lxc_health: Link DOWN Nov 23 23:09:48.317906 systemd-networkd[1438]: lxc_health: Lost carrier Nov 23 23:09:48.328881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751-rootfs.mount: Deactivated successfully. Nov 23 23:09:48.335887 systemd[1]: cri-containerd-6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c.scope: Deactivated successfully. Nov 23 23:09:48.336237 systemd[1]: cri-containerd-6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c.scope: Consumed 6.658s CPU time, 122.7M memory peak, 136K read from disk, 12.9M written to disk. Nov 23 23:09:48.337662 containerd[1501]: time="2025-11-23T23:09:48.337610817Z" level=info msg="received container exit event container_id:\"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" id:\"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" pid:3352 exited_at:{seconds:1763939388 nanos:337152653}" Nov 23 23:09:48.348731 containerd[1501]: time="2025-11-23T23:09:48.348670597Z" level=info msg="StopContainer for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" returns successfully" Nov 23 23:09:48.351293 containerd[1501]: time="2025-11-23T23:09:48.350952818Z" level=info msg="StopPodSandbox for \"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\"" Nov 23 23:09:48.351293 containerd[1501]: time="2025-11-23T23:09:48.351080499Z" level=info msg="Container to stop \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 23 23:09:48.361776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c-rootfs.mount: Deactivated successfully. 
Nov 23 23:09:48.363449 systemd[1]: cri-containerd-f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b.scope: Deactivated successfully. Nov 23 23:09:48.367542 containerd[1501]: time="2025-11-23T23:09:48.367499167Z" level=info msg="received sandbox exit event container_id:\"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" id:\"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" exit_status:137 exited_at:{seconds:1763939388 nanos:367147124}" monitor_name=podsandbox Nov 23 23:09:48.373545 containerd[1501]: time="2025-11-23T23:09:48.373500102Z" level=info msg="StopContainer for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" returns successfully" Nov 23 23:09:48.374780 containerd[1501]: time="2025-11-23T23:09:48.374725553Z" level=info msg="StopPodSandbox for \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\"" Nov 23 23:09:48.374914 containerd[1501]: time="2025-11-23T23:09:48.374826914Z" level=info msg="Container to stop \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 23 23:09:48.374914 containerd[1501]: time="2025-11-23T23:09:48.374840514Z" level=info msg="Container to stop \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 23 23:09:48.374914 containerd[1501]: time="2025-11-23T23:09:48.374849594Z" level=info msg="Container to stop \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 23 23:09:48.374914 containerd[1501]: time="2025-11-23T23:09:48.374858714Z" level=info msg="Container to stop \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 23 23:09:48.374914 containerd[1501]: time="2025-11-23T23:09:48.374868794Z" level=info msg="Container to stop \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 23 23:09:48.381681 systemd[1]: cri-containerd-309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781.scope: Deactivated successfully. Nov 23 23:09:48.383218 containerd[1501]: time="2025-11-23T23:09:48.383150669Z" level=info msg="received sandbox exit event container_id:\"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" id:\"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" exit_status:137 exited_at:{seconds:1763939388 nanos:382902506}" monitor_name=podsandbox Nov 23 23:09:48.395687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b-rootfs.mount: Deactivated successfully. Nov 23 23:09:48.405460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781-rootfs.mount: Deactivated successfully. 
Nov 23 23:09:48.418029 containerd[1501]: time="2025-11-23T23:09:48.417989623Z" level=info msg="shim disconnected" id=f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b namespace=k8s.io Nov 23 23:09:48.418206 containerd[1501]: time="2025-11-23T23:09:48.418021744Z" level=warning msg="cleaning up after shim disconnected" id=f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b namespace=k8s.io Nov 23 23:09:48.418206 containerd[1501]: time="2025-11-23T23:09:48.418054184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 23 23:09:48.419357 containerd[1501]: time="2025-11-23T23:09:48.419284235Z" level=info msg="shim disconnected" id=309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781 namespace=k8s.io Nov 23 23:09:48.419647 containerd[1501]: time="2025-11-23T23:09:48.419460637Z" level=warning msg="cleaning up after shim disconnected" id=309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781 namespace=k8s.io Nov 23 23:09:48.419647 containerd[1501]: time="2025-11-23T23:09:48.419504917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 23 23:09:48.447612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b-shm.mount: Deactivated successfully. Nov 23 23:09:48.447720 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781-shm.mount: Deactivated successfully. Nov 23 23:09:48.448241 containerd[1501]: time="2025-11-23T23:09:48.447892174Z" level=info msg="TearDown network for sandbox \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" successfully" Nov 23 23:09:48.448241 containerd[1501]: time="2025-11-23T23:09:48.447929494Z" level=info msg="StopPodSandbox for \"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" returns successfully" Nov 23 23:09:48.449184 containerd[1501]: time="2025-11-23T23:09:48.448531699Z" level=info msg="TearDown network for sandbox \"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" successfully" Nov 23 23:09:48.449184 containerd[1501]: time="2025-11-23T23:09:48.448563540Z" level=info msg="StopPodSandbox for \"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" returns successfully" Nov 23 23:09:48.451156 containerd[1501]: time="2025-11-23T23:09:48.451119283Z" level=info msg="received sandbox container exit event sandbox_id:\"309a791571f952d93c62a7bd8bc2828f9e9df724cef0f0fbbcb595b67fd90781\" exit_status:137 exited_at:{seconds:1763939388 nanos:382902506}" monitor_name=criService Nov 23 23:09:48.451618 containerd[1501]: time="2025-11-23T23:09:48.451584567Z" level=info msg="received sandbox container exit event sandbox_id:\"f835d297d726f5595cb95d1823ec811af2f9b7c1625b7c11c5d106e77c769a1b\" exit_status:137 exited_at:{seconds:1763939388 nanos:367147124}" monitor_name=criService Nov 23 23:09:48.592255 kubelet[2664]: I1123 23:09:48.592105 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-hostproc" (OuterVolumeSpecName: "hostproc") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.593056 kubelet[2664]: I1123 23:09:48.592620 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-hostproc\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593056 kubelet[2664]: I1123 23:09:48.592684 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphlm\" (UniqueName: \"kubernetes.io/projected/e0854d48-ec88-498c-8016-23747ab2b325-kube-api-access-xphlm\") pod \"e0854d48-ec88-498c-8016-23747ab2b325\" (UID: \"e0854d48-ec88-498c-8016-23747ab2b325\") " Nov 23 23:09:48.593056 kubelet[2664]: I1123 23:09:48.592705 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4291a75a-e0d5-485c-9a73-94161cf73fc1-clustermesh-secrets\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593056 kubelet[2664]: I1123 23:09:48.592724 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-config-path\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593056 kubelet[2664]: I1123 23:09:48.592738 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-net\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593056 kubelet[2664]: I1123 23:09:48.592750 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cni-path\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593499 kubelet[2664]: I1123 23:09:48.592780 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cni-path" (OuterVolumeSpecName: "cni-path") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.593499 kubelet[2664]: I1123 23:09:48.593132 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.593499 kubelet[2664]: I1123 23:09:48.593205 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0854d48-ec88-498c-8016-23747ab2b325-cilium-config-path\") pod \"e0854d48-ec88-498c-8016-23747ab2b325\" (UID: \"e0854d48-ec88-498c-8016-23747ab2b325\") " Nov 23 23:09:48.593499 kubelet[2664]: I1123 23:09:48.593224 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-lib-modules\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593499 kubelet[2664]: I1123 23:09:48.593240 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-etc-cni-netd\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.593499 kubelet[2664]: I1123 23:09:48.593263 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-bpf-maps\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.594051 kubelet[2664]: I1123 23:09:48.593277 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-kernel\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.594051 kubelet[2664]: I1123 23:09:48.593290 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-run\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.594051 kubelet[2664]: I1123 23:09:48.593309 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-hubble-tls\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.594051 kubelet[2664]: I1123 23:09:48.593325 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8qdz\" (UniqueName: \"kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-kube-api-access-d8qdz\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.594051 kubelet[2664]: I1123 23:09:48.593341 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-cgroup\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") " Nov 23 23:09:48.594051 kubelet[2664]: I1123 23:09:48.593354 2664 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-xtables-lock\") pod \"4291a75a-e0d5-485c-9a73-94161cf73fc1\" (UID: \"4291a75a-e0d5-485c-9a73-94161cf73fc1\") 
" Nov 23 23:09:48.594243 kubelet[2664]: I1123 23:09:48.593558 2664 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.594243 kubelet[2664]: I1123 23:09:48.593578 2664 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.594243 kubelet[2664]: I1123 23:09:48.593586 2664 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.594243 kubelet[2664]: I1123 23:09:48.593612 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.594347 kubelet[2664]: I1123 23:09:48.594318 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.594393 kubelet[2664]: I1123 23:09:48.594358 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.594420 kubelet[2664]: I1123 23:09:48.594402 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.594420 kubelet[2664]: I1123 23:09:48.594416 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.594774 kubelet[2664]: I1123 23:09:48.594726 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:09:48.594806 kubelet[2664]: I1123 23:09:48.594784 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.594806 kubelet[2664]: I1123 23:09:48.594802 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 23 23:09:48.596198 kubelet[2664]: I1123 23:09:48.596132 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0854d48-ec88-498c-8016-23747ab2b325-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0854d48-ec88-498c-8016-23747ab2b325" (UID: "e0854d48-ec88-498c-8016-23747ab2b325"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:09:48.596982 kubelet[2664]: I1123 23:09:48.596944 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-kube-api-access-d8qdz" (OuterVolumeSpecName: "kube-api-access-d8qdz") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "kube-api-access-d8qdz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:09:48.597423 kubelet[2664]: I1123 23:09:48.597392 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0854d48-ec88-498c-8016-23747ab2b325-kube-api-access-xphlm" (OuterVolumeSpecName: "kube-api-access-xphlm") pod "e0854d48-ec88-498c-8016-23747ab2b325" (UID: "e0854d48-ec88-498c-8016-23747ab2b325"). InnerVolumeSpecName "kube-api-access-xphlm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:09:48.597492 kubelet[2664]: I1123 23:09:48.597438 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4291a75a-e0d5-485c-9a73-94161cf73fc1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:09:48.598598 kubelet[2664]: I1123 23:09:48.598552 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4291a75a-e0d5-485c-9a73-94161cf73fc1" (UID: "4291a75a-e0d5-485c-9a73-94161cf73fc1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693852 2664 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8qdz\" (UniqueName: \"kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-kube-api-access-d8qdz\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693885 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693896 2664 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693904 2664 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xphlm\" (UniqueName: \"kubernetes.io/projected/e0854d48-ec88-498c-8016-23747ab2b325-kube-api-access-xphlm\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693912 2664 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4291a75a-e0d5-485c-9a73-94161cf73fc1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693922 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693930 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0854d48-ec88-498c-8016-23747ab2b325-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.693935 kubelet[2664]: I1123 23:09:48.693937 2664 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.694936 kubelet[2664]: I1123 23:09:48.693946 2664 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.694936 kubelet[2664]: I1123 23:09:48.693954 2664 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.694936 kubelet[2664]: I1123 23:09:48.693962 2664 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.694936 kubelet[2664]: I1123 23:09:48.693969 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4291a75a-e0d5-485c-9a73-94161cf73fc1-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.694936 kubelet[2664]: I1123 23:09:48.693976 2664 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/4291a75a-e0d5-485c-9a73-94161cf73fc1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 23 23:09:48.967195 systemd[1]: Removed slice kubepods-besteffort-pode0854d48_ec88_498c_8016_23747ab2b325.slice - libcontainer container kubepods-besteffort-pode0854d48_ec88_498c_8016_23747ab2b325.slice. Nov 23 23:09:48.968345 systemd[1]: Removed slice kubepods-burstable-pod4291a75a_e0d5_485c_9a73_94161cf73fc1.slice - libcontainer container kubepods-burstable-pod4291a75a_e0d5_485c_9a73_94161cf73fc1.slice. Nov 23 23:09:48.968455 systemd[1]: kubepods-burstable-pod4291a75a_e0d5_485c_9a73_94161cf73fc1.slice: Consumed 6.772s CPU time, 123M memory peak, 6.2M read from disk, 12.9M written to disk. Nov 23 23:09:49.192905 kubelet[2664]: I1123 23:09:49.192861 2664 scope.go:117] "RemoveContainer" containerID="6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c" Nov 23 23:09:49.198056 containerd[1501]: time="2025-11-23T23:09:49.198011793Z" level=info msg="RemoveContainer for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\"" Nov 23 23:09:49.227260 containerd[1501]: time="2025-11-23T23:09:49.226339883Z" level=info msg="RemoveContainer for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" returns successfully" Nov 23 23:09:49.227430 kubelet[2664]: I1123 23:09:49.226639 2664 scope.go:117] "RemoveContainer" containerID="f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c" Nov 23 23:09:49.231267 containerd[1501]: time="2025-11-23T23:09:49.230712602Z" level=info msg="RemoveContainer for \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\"" Nov 23 23:09:49.245349 containerd[1501]: time="2025-11-23T23:09:49.245217610Z" level=info msg="RemoveContainer for \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\" returns successfully" Nov 23 23:09:49.246064 kubelet[2664]: I1123 23:09:49.245663 2664 scope.go:117] "RemoveContainer" containerID="fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292" Nov 23 23:09:49.250631 containerd[1501]: time="2025-11-23T23:09:49.250587778Z" level=info msg="RemoveContainer for \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\"" Nov 23 23:09:49.254918 containerd[1501]: time="2025-11-23T23:09:49.254865135Z" level=info msg="RemoveContainer for \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\" returns successfully" Nov 23 23:09:49.255305 kubelet[2664]: I1123 23:09:49.255152 2664 scope.go:117] "RemoveContainer" containerID="5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7" Nov 23 23:09:49.257785 containerd[1501]: time="2025-11-23T23:09:49.257739761Z" level=info msg="RemoveContainer for \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\"" Nov 23 23:09:49.261394 containerd[1501]: time="2025-11-23T23:09:49.261351793Z" level=info msg="RemoveContainer for \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\" returns successfully" Nov 23 23:09:49.261725 kubelet[2664]: I1123 23:09:49.261586 2664 scope.go:117] "RemoveContainer" containerID="4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1" Nov 23 23:09:49.263160 containerd[1501]: time="2025-11-23T23:09:49.263132369Z" level=info msg="RemoveContainer for \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\"" Nov 23 23:09:49.266094 containerd[1501]: time="2025-11-23T23:09:49.266005434Z" level=info msg="RemoveContainer for \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\" returns successfully" 
Nov 23 23:09:49.266326 kubelet[2664]: I1123 23:09:49.266302 2664 scope.go:117] "RemoveContainer" containerID="6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c" Nov 23 23:09:49.276622 containerd[1501]: time="2025-11-23T23:09:49.266589839Z" level=error msg="ContainerStatus for \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\": not found" Nov 23 23:09:49.276985 kubelet[2664]: E1123 23:09:49.276835 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\": not found" containerID="6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c" Nov 23 23:09:49.276985 kubelet[2664]: I1123 23:09:49.276875 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c"} err="failed to get container status \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6454f76659a8dd0d0668dfc5f757777fea7bb50c3be7d4d070789eaf9395ee6c\": not found" Nov 23 23:09:49.276985 kubelet[2664]: I1123 23:09:49.276913 2664 scope.go:117] "RemoveContainer" containerID="f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c" Nov 23 23:09:49.277270 containerd[1501]: time="2025-11-23T23:09:49.277197173Z" level=error msg="ContainerStatus for \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\": not found" Nov 23 23:09:49.278755 kubelet[2664]: E1123 23:09:49.278614 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\": not found" containerID="f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c" Nov 23 23:09:49.278755 kubelet[2664]: I1123 23:09:49.278691 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c"} err="failed to get container status \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f46f351c666bd379e30e3cfff3b7aa00a440512ec98cdb071b6abb9c308f643c\": not found" Nov 23 23:09:49.278755 kubelet[2664]: I1123 23:09:49.278712 2664 scope.go:117] "RemoveContainer" containerID="fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292" Nov 23 23:09:49.279262 containerd[1501]: time="2025-11-23T23:09:49.278930748Z" level=error msg="ContainerStatus for \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\": not found" Nov 23 23:09:49.279942 kubelet[2664]: E1123 23:09:49.279066 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\": not found" containerID="fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292" Nov 23 23:09:49.280011 kubelet[2664]: I1123 23:09:49.279919 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292"} err="failed to get container status \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc2fa03eee861c62b1821fc958e2bd2a287876bd25798306f2d39c4023dbb292\": not found" Nov 23 23:09:49.280135 kubelet[2664]: I1123 23:09:49.280068 2664 scope.go:117] "RemoveContainer" containerID="5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7" Nov 23 23:09:49.283448 containerd[1501]: time="2025-11-23T23:09:49.280400201Z" level=error msg="ContainerStatus for \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\": not found" Nov 23 23:09:49.283448 containerd[1501]: time="2025-11-23T23:09:49.280815045Z" level=error msg="ContainerStatus for \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\": not found" Nov 23 23:09:49.283448 containerd[1501]: time="2025-11-23T23:09:49.282607661Z" level=info msg="RemoveContainer for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\"" Nov 23 23:09:49.283586 kubelet[2664]: E1123 23:09:49.280555 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\": not found" containerID="5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7" Nov 23 23:09:49.283586 kubelet[2664]: I1123 23:09:49.280578 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7"} err="failed to get container status \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5db2a082f546cac9fa0149d44b13f3ec87d771061e29fee1a7a45b8a989cc4d7\": not found" Nov 23 23:09:49.283586 kubelet[2664]: I1123 23:09:49.280609 2664 scope.go:117] "RemoveContainer" containerID="4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1" Nov 23 23:09:49.283586 kubelet[2664]: E1123 23:09:49.280977 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\": not found" containerID="4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1" Nov 23 23:09:49.283586 kubelet[2664]: I1123 23:09:49.281024 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1"} err="failed to get container status \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4d34c567202a53d999b7296f5662ac21e3737e93fbdc7f1fde274b0ddb1114b1\": not found" Nov 23 23:09:49.283586 kubelet[2664]: I1123 23:09:49.281039 2664 scope.go:117] "RemoveContainer" containerID="92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751" Nov 23 23:09:49.286076 containerd[1501]: time="2025-11-23T23:09:49.286005611Z" level=info msg="RemoveContainer for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" returns successfully" Nov 23 23:09:49.286390 kubelet[2664]: I1123 23:09:49.286311 2664 scope.go:117] "RemoveContainer" containerID="92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751" Nov 23 23:09:49.286610 containerd[1501]: time="2025-11-23T23:09:49.286578976Z" level=error msg="ContainerStatus for \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\": not found" Nov 23 23:09:49.286728 kubelet[2664]: E1123 23:09:49.286703 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\": not found" containerID="92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751" Nov 23 23:09:49.286776 kubelet[2664]: I1123 23:09:49.286735 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751"} err="failed to get container status \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\": rpc error: code = NotFound desc = an error occurred when try to find container \"92e1f6b991356744d60a3f14880c1d9b1373e63ea055a816d98acf6b16819751\": not found" Nov 23 23:09:49.328122 systemd[1]: var-lib-kubelet-pods-e0854d48\x2dec88\x2d498c\x2d8016\x2d23747ab2b325-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxphlm.mount: Deactivated successfully. Nov 23 23:09:49.328262 systemd[1]: var-lib-kubelet-pods-4291a75a\x2de0d5\x2d485c\x2d9a73\x2d94161cf73fc1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd8qdz.mount: Deactivated successfully. Nov 23 23:09:49.328327 systemd[1]: var-lib-kubelet-pods-4291a75a\x2de0d5\x2d485c\x2d9a73\x2d94161cf73fc1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 23 23:09:49.328397 systemd[1]: var-lib-kubelet-pods-4291a75a\x2de0d5\x2d485c\x2d9a73\x2d94161cf73fc1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 23 23:09:50.200009 sshd[4292]: Connection closed by 10.0.0.1 port 53132 Nov 23 23:09:50.201842 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:50.212130 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:53132.service: Deactivated successfully. Nov 23 23:09:50.217746 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 23:09:50.217989 systemd[1]: session-23.scope: Consumed 1.417s CPU time, 26M memory peak. Nov 23 23:09:50.221245 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Nov 23 23:09:50.226576 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:50746.service - OpenSSH per-connection server daemon (10.0.0.1:50746). Nov 23 23:09:50.229817 systemd-logind[1486]: Removed session 23. 
Nov 23 23:09:50.288053 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 50746 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:50.289752 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:50.297260 systemd-logind[1486]: New session 24 of user core. Nov 23 23:09:50.301364 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 23 23:09:50.961699 kubelet[2664]: I1123 23:09:50.961593 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4291a75a-e0d5-485c-9a73-94161cf73fc1" path="/var/lib/kubelet/pods/4291a75a-e0d5-485c-9a73-94161cf73fc1/volumes" Nov 23 23:09:50.962781 kubelet[2664]: I1123 23:09:50.962727 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0854d48-ec88-498c-8016-23747ab2b325" path="/var/lib/kubelet/pods/e0854d48-ec88-498c-8016-23747ab2b325/volumes" Nov 23 23:09:51.495112 sshd[4440]: Connection closed by 10.0.0.1 port 50746 Nov 23 23:09:51.497810 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:51.515028 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:50746.service: Deactivated successfully. Nov 23 23:09:51.517740 systemd[1]: session-24.scope: Deactivated successfully. Nov 23 23:09:51.518736 systemd[1]: session-24.scope: Consumed 1.027s CPU time, 24M memory peak. Nov 23 23:09:51.522662 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Nov 23 23:09:51.530766 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:50756.service - OpenSSH per-connection server daemon (10.0.0.1:50756). Nov 23 23:09:51.533409 systemd-logind[1486]: Removed session 24. Nov 23 23:09:51.561077 systemd[1]: Created slice kubepods-burstable-pod432407c8_0899_4bec_96ac_b2cf7ce38507.slice - libcontainer container kubepods-burstable-pod432407c8_0899_4bec_96ac_b2cf7ce38507.slice. 
Nov 23 23:09:51.608598 kubelet[2664]: I1123 23:09:51.608552 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-xtables-lock\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608598 kubelet[2664]: I1123 23:09:51.608600 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-host-proc-sys-net\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608771 kubelet[2664]: I1123 23:09:51.608620 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/432407c8-0899-4bec-96ac-b2cf7ce38507-hubble-tls\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608771 kubelet[2664]: I1123 23:09:51.608636 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-cilium-cgroup\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608771 kubelet[2664]: I1123 23:09:51.608651 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-etc-cni-netd\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608771 kubelet[2664]: I1123 23:09:51.608664 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-lib-modules\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608771 kubelet[2664]: I1123 23:09:51.608677 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/432407c8-0899-4bec-96ac-b2cf7ce38507-cilium-ipsec-secrets\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608771 kubelet[2664]: I1123 23:09:51.608692 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-bpf-maps\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608902 kubelet[2664]: I1123 23:09:51.608705 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-hostproc\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608902 kubelet[2664]: I1123 23:09:51.608720 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-cni-path\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608902 kubelet[2664]: I1123 23:09:51.608733 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/432407c8-0899-4bec-96ac-b2cf7ce38507-cilium-config-path\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608902 kubelet[2664]: I1123 23:09:51.608748 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-host-proc-sys-kernel\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608902 kubelet[2664]: I1123 23:09:51.608763 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzxvt\" (UniqueName: \"kubernetes.io/projected/432407c8-0899-4bec-96ac-b2cf7ce38507-kube-api-access-nzxvt\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.608902 kubelet[2664]: I1123 23:09:51.608778 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/432407c8-0899-4bec-96ac-b2cf7ce38507-cilium-run\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.609014 kubelet[2664]: I1123 23:09:51.608794 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/432407c8-0899-4bec-96ac-b2cf7ce38507-clustermesh-secrets\") pod \"cilium-lj4ff\" (UID: \"432407c8-0899-4bec-96ac-b2cf7ce38507\") " pod="kube-system/cilium-lj4ff" Nov 23 23:09:51.614261 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 50756 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:51.616112 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:51.625249 systemd-logind[1486]: New session 25 of user core. Nov 23 23:09:51.638735 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 23 23:09:51.690833 sshd[4454]: Connection closed by 10.0.0.1 port 50756 Nov 23 23:09:51.691400 sshd-session[4451]: pam_unix(sshd:session): session closed for user core Nov 23 23:09:51.702858 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:50756.service: Deactivated successfully. Nov 23 23:09:51.705024 systemd[1]: session-25.scope: Deactivated successfully. Nov 23 23:09:51.708111 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Nov 23 23:09:51.711236 systemd[1]: Started sshd@25-10.0.0.64:22-10.0.0.1:50760.service - OpenSSH per-connection server daemon (10.0.0.1:50760). Nov 23 23:09:51.711874 systemd-logind[1486]: Removed session 25. Nov 23 23:09:51.776213 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 50760 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:09:51.777925 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:09:51.782087 systemd-logind[1486]: New session 26 of user core. 
Nov 23 23:09:51.791380 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 23 23:09:51.868721 containerd[1501]: time="2025-11-23T23:09:51.868651258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj4ff,Uid:432407c8-0899-4bec-96ac-b2cf7ce38507,Namespace:kube-system,Attempt:0,}" Nov 23 23:09:51.886467 containerd[1501]: time="2025-11-23T23:09:51.886417168Z" level=info msg="connecting to shim b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3" address="unix:///run/containerd/s/5ee7f98bdc517bd9570f55494c2d5a74499200ebfe2d824a223f5ea8274c56ef" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:09:51.919486 systemd[1]: Started cri-containerd-b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3.scope - libcontainer container b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3. Nov 23 23:09:51.947924 containerd[1501]: time="2025-11-23T23:09:51.947876449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj4ff,Uid:432407c8-0899-4bec-96ac-b2cf7ce38507,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\"" Nov 23 23:09:51.954497 containerd[1501]: time="2025-11-23T23:09:51.954442664Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 23 23:09:51.963160 containerd[1501]: time="2025-11-23T23:09:51.962525573Z" level=info msg="Container 2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:51.969015 containerd[1501]: time="2025-11-23T23:09:51.968956307Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e\"" Nov 23 23:09:51.969563 containerd[1501]: time="2025-11-23T23:09:51.969541152Z" level=info msg="StartContainer for \"2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e\"" Nov 23 23:09:51.970474 containerd[1501]: time="2025-11-23T23:09:51.970424560Z" level=info msg="connecting to shim 2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e" address="unix:///run/containerd/s/5ee7f98bdc517bd9570f55494c2d5a74499200ebfe2d824a223f5ea8274c56ef" protocol=ttrpc version=3 Nov 23 23:09:51.994432 systemd[1]: Started cri-containerd-2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e.scope - libcontainer container 2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e. Nov 23 23:09:52.029002 containerd[1501]: time="2025-11-23T23:09:52.028508407Z" level=info msg="StartContainer for \"2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e\" returns successfully" Nov 23 23:09:52.034903 systemd[1]: cri-containerd-2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e.scope: Deactivated successfully. 
Nov 23 23:09:52.036574 containerd[1501]: time="2025-11-23T23:09:52.036524674Z" level=info msg="received container exit event container_id:\"2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e\" id:\"2ba1ce2f53e299d74289d9566b792d64b60d454630e5b316db52bfa6c1c6eb2e\" pid:4534 exited_at:{seconds:1763939392 nanos:36184431}" Nov 23 23:09:52.239913 containerd[1501]: time="2025-11-23T23:09:52.238646511Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 23 23:09:52.255345 containerd[1501]: time="2025-11-23T23:09:52.255093887Z" level=info msg="Container 08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:52.262502 containerd[1501]: time="2025-11-23T23:09:52.262456508Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973\"" Nov 23 23:09:52.263440 containerd[1501]: time="2025-11-23T23:09:52.263339915Z" level=info msg="StartContainer for \"08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973\"" Nov 23 23:09:52.264747 containerd[1501]: time="2025-11-23T23:09:52.264597206Z" level=info msg="connecting to shim 08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973" address="unix:///run/containerd/s/5ee7f98bdc517bd9570f55494c2d5a74499200ebfe2d824a223f5ea8274c56ef" protocol=ttrpc version=3 Nov 23 23:09:52.295404 systemd[1]: Started cri-containerd-08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973.scope - libcontainer container 08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973. Nov 23 23:09:52.324799 containerd[1501]: time="2025-11-23T23:09:52.324720825Z" level=info msg="StartContainer for \"08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973\" returns successfully" Nov 23 23:09:52.331093 systemd[1]: cri-containerd-08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973.scope: Deactivated successfully. Nov 23 23:09:52.331588 containerd[1501]: time="2025-11-23T23:09:52.331549801Z" level=info msg="received container exit event container_id:\"08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973\" id:\"08f1880ac8d0d2be664f52cdc28649d33c87a3cac043991c6d7a5ad1e1472973\" pid:4582 exited_at:{seconds:1763939392 nanos:331308999}" Nov 23 23:09:53.022995 kubelet[2664]: E1123 23:09:53.022954 2664 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 23 23:09:53.230048 containerd[1501]: time="2025-11-23T23:09:53.229998976Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 23 23:09:53.243564 containerd[1501]: time="2025-11-23T23:09:53.243520926Z" level=info msg="Container cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:53.250308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710855454.mount: Deactivated successfully. 
Nov 23 23:09:53.254972 containerd[1501]: time="2025-11-23T23:09:53.254918939Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e\"" Nov 23 23:09:53.255443 containerd[1501]: time="2025-11-23T23:09:53.255412823Z" level=info msg="StartContainer for \"cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e\"" Nov 23 23:09:53.259018 containerd[1501]: time="2025-11-23T23:09:53.258926411Z" level=info msg="connecting to shim cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e" address="unix:///run/containerd/s/5ee7f98bdc517bd9570f55494c2d5a74499200ebfe2d824a223f5ea8274c56ef" protocol=ttrpc version=3 Nov 23 23:09:53.283420 systemd[1]: Started cri-containerd-cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e.scope - libcontainer container cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e. Nov 23 23:09:53.378451 systemd[1]: cri-containerd-cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e.scope: Deactivated successfully. Nov 23 23:09:53.379632 containerd[1501]: time="2025-11-23T23:09:53.379550672Z" level=info msg="StartContainer for \"cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e\" returns successfully" Nov 23 23:09:53.381193 containerd[1501]: time="2025-11-23T23:09:53.381146525Z" level=info msg="received container exit event container_id:\"cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e\" id:\"cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e\" pid:4627 exited_at:{seconds:1763939393 nanos:380845242}" Nov 23 23:09:53.403446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdecf9b3d4db9acd9d9071190814b26a50152f6a5bc4f5035219322b6d4f085e-rootfs.mount: Deactivated successfully. Nov 23 23:09:54.242915 containerd[1501]: time="2025-11-23T23:09:54.241143274Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 23 23:09:54.262218 containerd[1501]: time="2025-11-23T23:09:54.261609316Z" level=info msg="Container cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:54.273078 containerd[1501]: time="2025-11-23T23:09:54.273037247Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee\"" Nov 23 23:09:54.274416 containerd[1501]: time="2025-11-23T23:09:54.274360818Z" level=info msg="StartContainer for \"cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee\"" Nov 23 23:09:54.275742 containerd[1501]: time="2025-11-23T23:09:54.275708309Z" level=info msg="connecting to shim cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee" address="unix:///run/containerd/s/5ee7f98bdc517bd9570f55494c2d5a74499200ebfe2d824a223f5ea8274c56ef" protocol=ttrpc version=3 Nov 23 23:09:54.302017 systemd[1]: Started cri-containerd-cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee.scope - libcontainer container cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee. 
Nov 23 23:09:54.337194 systemd[1]: cri-containerd-cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee.scope: Deactivated successfully. Nov 23 23:09:54.338906 containerd[1501]: time="2025-11-23T23:09:54.338837971Z" level=info msg="received container exit event container_id:\"cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee\" id:\"cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee\" pid:4666 exited_at:{seconds:1763939394 nanos:337161078}" Nov 23 23:09:54.347963 containerd[1501]: time="2025-11-23T23:09:54.347898923Z" level=info msg="StartContainer for \"cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee\" returns successfully" Nov 23 23:09:54.362492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf3fbc5686470594187800c9d22a77ec1215cc8efae25af1f10b5065d10e8cee-rootfs.mount: Deactivated successfully. Nov 23 23:09:55.083741 kubelet[2664]: I1123 23:09:55.083668 2664 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T23:09:55Z","lastTransitionTime":"2025-11-23T23:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 23 23:09:55.244803 containerd[1501]: time="2025-11-23T23:09:55.244689384Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 23 23:09:55.264323 containerd[1501]: time="2025-11-23T23:09:55.263528811Z" level=info msg="Container f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:09:55.283999 containerd[1501]: time="2025-11-23T23:09:55.283936010Z" level=info msg="CreateContainer within sandbox \"b48c21eaab2faa77666c6ce02eae3d48a5c1a90fb7370be62e72c17c5b3ac2b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0\"" Nov 23 23:09:55.284548 containerd[1501]: time="2025-11-23T23:09:55.284516735Z" level=info msg="StartContainer for \"f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0\"" Nov 23 23:09:55.286410 containerd[1501]: time="2025-11-23T23:09:55.286350949Z" level=info msg="connecting to shim f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0" address="unix:///run/containerd/s/5ee7f98bdc517bd9570f55494c2d5a74499200ebfe2d824a223f5ea8274c56ef" protocol=ttrpc version=3 Nov 23 23:09:55.315566 systemd[1]: Started cri-containerd-f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0.scope - libcontainer container f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0. 
Nov 23 23:09:55.374668 containerd[1501]: time="2025-11-23T23:09:55.374462116Z" level=info msg="StartContainer for \"f18b223bf2196facd20f895a907a5519c5de2af71d9bfedaa52cfc2433c491d0\" returns successfully" Nov 23 23:09:55.687193 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Nov 23 23:09:56.271837 kubelet[2664]: I1123 23:09:56.271748 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lj4ff" podStartSLOduration=5.271717474 podStartE2EDuration="5.271717474s" podCreationTimestamp="2025-11-23 23:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:09:56.270860508 +0000 UTC m=+83.417842392" watchObservedRunningTime="2025-11-23 23:09:56.271717474 +0000 UTC m=+83.418699358" Nov 23 23:09:58.702918 systemd-networkd[1438]: lxc_health: Link UP Nov 23 23:09:58.704509 systemd-networkd[1438]: lxc_health: Gained carrier Nov 23 23:10:00.235361 systemd-networkd[1438]: lxc_health: Gained IPv6LL Nov 23 23:10:04.750537 sshd[4468]: Connection closed by 10.0.0.1 port 50760 Nov 23 23:10:04.751194 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:04.755469 systemd[1]: sshd@25-10.0.0.64:22-10.0.0.1:50760.service: Deactivated successfully. Nov 23 23:10:04.757489 systemd[1]: session-26.scope: Deactivated successfully. Nov 23 23:10:04.758617 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit. Nov 23 23:10:04.759874 systemd-logind[1486]: Removed session 26.