Jul 14 21:43:58.748057 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:43:58.748079 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 14 20:49:56 -00 2025
Jul 14 21:43:58.748087 kernel: efi: EFI v2.70 by EDK II
Jul 14 21:43:58.748092 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 14 21:43:58.748097 kernel: random: crng init done
Jul 14 21:43:58.748103 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:43:58.748109 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 14 21:43:58.748116 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:43:58.748122 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748127 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748134 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748139 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748144 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748150 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748158 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748165 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748171 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:43:58.748177 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:43:58.748183 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:43:58.748189 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:43:58.748194 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 14 21:43:58.748200 kernel: Zone ranges:
Jul 14 21:43:58.748206 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:43:58.748213 kernel: DMA32 empty
Jul 14 21:43:58.748219 kernel: Normal empty
Jul 14 21:43:58.748225 kernel: Movable zone start for each node
Jul 14 21:43:58.748231 kernel: Early memory node ranges
Jul 14 21:43:58.748237 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 14 21:43:58.748242 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 14 21:43:58.748248 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 14 21:43:58.748254 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 14 21:43:58.748259 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 14 21:43:58.748265 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 14 21:43:58.748271 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 14 21:43:58.748276 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:43:58.748284 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:43:58.748289 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:43:58.748295 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:43:58.748300 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:43:58.748306 kernel: psci: Trusted OS migration not required
Jul 14 21:43:58.748314 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:43:58.748320 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:43:58.748328 kernel: ACPI: SRAT not present
Jul 14 21:43:58.748341 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 14 21:43:58.748347 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 14 21:43:58.748354 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:43:58.748360 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:43:58.748367 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:43:58.748373 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:43:58.748379 kernel: CPU features: detected: Spectre-v4
Jul 14 21:43:58.748385 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:43:58.748392 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:43:58.748398 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:43:58.748405 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:43:58.748411 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:43:58.748417 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:43:58.748423 kernel: Policy zone: DMA
Jul 14 21:43:58.748430 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236
Jul 14 21:43:58.748437 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:43:58.748443 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:43:58.748450 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:43:58.748456 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:43:58.748464 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 14 21:43:58.748470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:43:58.748476 kernel: trace event string verifier disabled
Jul 14 21:43:58.748482 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:43:58.748495 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:43:58.748501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:43:58.748507 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:43:58.748514 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:43:58.748520 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:43:58.748527 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:43:58.748533 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:43:58.748541 kernel: GICv3: 256 SPIs implemented
Jul 14 21:43:58.748547 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:43:58.748554 kernel: GICv3: Distributor has no Range Selector support
Jul 14 21:43:58.748560 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:43:58.748566 kernel: GICv3: 16 PPIs implemented
Jul 14 21:43:58.748572 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:43:58.748578 kernel: ACPI: SRAT not present
Jul 14 21:43:58.748584 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:43:58.748590 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:43:58.748597 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:43:58.748603 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 14 21:43:58.748609 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 14 21:43:58.748617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:43:58.748623 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:43:58.748629 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:43:58.748635 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:43:58.748641 kernel: arm-pv: using stolen time PV
Jul 14 21:43:58.748648 kernel: Console: colour dummy device 80x25
Jul 14 21:43:58.748654 kernel: ACPI: Core revision 20210730
Jul 14 21:43:58.748661 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:43:58.748667 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:43:58.748673 kernel: LSM: Security Framework initializing
Jul 14 21:43:58.748681 kernel: SELinux: Initializing.
Jul 14 21:43:58.748687 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:43:58.748693 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:43:58.748700 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:43:58.748706 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:43:58.748712 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:43:58.748718 kernel: Remapping and enabling EFI services.
Jul 14 21:43:58.748724 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:43:58.748730 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:43:58.748738 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:43:58.748744 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 14 21:43:58.748750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:43:58.748757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:43:58.748763 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:43:58.748769 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:43:58.748776 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 14 21:43:58.748782 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:43:58.748788 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:43:58.748794 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:43:58.748801 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:43:58.748808 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 14 21:43:58.748815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:43:58.748823 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:43:58.748834 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:43:58.748842 kernel: SMP: Total of 4 processors activated.
Jul 14 21:43:58.748858 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:43:58.748864 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:43:58.748871 kernel: CPU features: detected: Common not Private translations
Jul 14 21:43:58.748877 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:43:58.748884 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:43:58.748890 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:43:58.748899 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:43:58.748906 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:43:58.748912 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:43:58.748919 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:43:58.748925 kernel: alternatives: patching kernel code
Jul 14 21:43:58.748933 kernel: devtmpfs: initialized
Jul 14 21:43:58.748940 kernel: KASLR enabled
Jul 14 21:43:58.748947 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:43:58.748953 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:43:58.748960 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:43:58.748966 kernel: SMBIOS 3.0.0 present.
Jul 14 21:43:58.748973 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 14 21:43:58.748979 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:43:58.748986 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:43:58.748994 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:43:58.749001 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:43:58.749008 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:43:58.749014 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
Jul 14 21:43:58.749021 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:43:58.749027 kernel: cpuidle: using governor menu
Jul 14 21:43:58.749034 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:43:58.749040 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:43:58.749047 kernel: ACPI: bus type PCI registered
Jul 14 21:43:58.749055 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:43:58.749061 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:43:58.749068 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:43:58.749074 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:43:58.749081 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:43:58.749088 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:43:58.749094 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 21:43:58.749101 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:43:58.749107 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:43:58.749115 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:43:58.749122 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:43:58.749128 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 14 21:43:58.749135 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 14 21:43:58.749141 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 14 21:43:58.749148 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:43:58.749154 kernel: ACPI: Interpreter enabled
Jul 14 21:43:58.749161 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:43:58.749167 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:43:58.749175 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:43:58.749182 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:43:58.749188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:43:58.749325 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:43:58.749412 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:43:58.752079 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:43:58.752172 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:43:58.752242 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:43:58.752251 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:43:58.752258 kernel: PCI host bridge to bus 0000:00
Jul 14 21:43:58.752340 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:43:58.752404 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:43:58.752456 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:43:58.752507 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:43:58.752586 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:43:58.752658 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:43:58.752718 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:43:58.752787 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:43:58.752878 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:43:58.752945 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:43:58.753005 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:43:58.753068 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:43:58.753122 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:43:58.753218 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:43:58.753316 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:43:58.753327 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:43:58.753344 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:43:58.753352 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:43:58.753359 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:43:58.753370 kernel: iommu: Default domain type: Translated
Jul 14 21:43:58.753379 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:43:58.753386 kernel: vgaarb: loaded
Jul 14 21:43:58.753393 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 14 21:43:58.753400 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 14 21:43:58.753408 kernel: PTP clock support registered
Jul 14 21:43:58.753416 kernel: Registered efivars operations
Jul 14 21:43:58.753423 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:43:58.753430 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:43:58.753442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:43:58.753449 kernel: pnp: PnP ACPI init
Jul 14 21:43:58.753539 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:43:58.753553 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:43:58.753559 kernel: NET: Registered PF_INET protocol family
Jul 14 21:43:58.753567 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:43:58.753574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:43:58.753583 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:43:58.753592 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:43:58.753599 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 14 21:43:58.753605 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:43:58.753614 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:43:58.753621 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:43:58.753628 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:43:58.753637 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:43:58.753643 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:43:58.753650 kernel: kvm [1]: HYP mode not available
Jul 14 21:43:58.753659 kernel: Initialise system trusted keyrings
Jul 14 21:43:58.753667 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:43:58.753674 kernel: Key type asymmetric registered
Jul 14 21:43:58.753681 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:43:58.753688 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 14 21:43:58.753694 kernel: io scheduler mq-deadline registered
Jul 14 21:43:58.753701 kernel: io scheduler kyber registered
Jul 14 21:43:58.753708 kernel: io scheduler bfq registered
Jul 14 21:43:58.753714 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:43:58.753723 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:43:58.753730 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:43:58.753796 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:43:58.753805 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:43:58.753812 kernel: thunder_xcv, ver 1.0
Jul 14 21:43:58.753819 kernel: thunder_bgx, ver 1.0
Jul 14 21:43:58.753826 kernel: nicpf, ver 1.0
Jul 14 21:43:58.753832 kernel: nicvf, ver 1.0
Jul 14 21:43:58.753954 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:43:58.754015 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:43:58 UTC (1752529438)
Jul 14 21:43:58.754025 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:43:58.754032 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:43:58.754039 kernel: Segment Routing with IPv6
Jul 14 21:43:58.754046 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:43:58.754053 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:43:58.754060 kernel: Key type dns_resolver registered
Jul 14 21:43:58.754066 kernel: registered taskstats version 1
Jul 14 21:43:58.754075 kernel: Loading compiled-in X.509 certificates
Jul 14 21:43:58.754082 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 118351bb2b1409a8fe1c98db16ecff1bb5342a27'
Jul 14 21:43:58.754089 kernel: Key type .fscrypt registered
Jul 14 21:43:58.754096 kernel: Key type fscrypt-provisioning registered
Jul 14 21:43:58.754102 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:43:58.754109 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:43:58.754116 kernel: ima: No architecture policies found
Jul 14 21:43:58.754123 kernel: clk: Disabling unused clocks
Jul 14 21:43:58.754129 kernel: Freeing unused kernel memory: 36416K
Jul 14 21:43:58.754137 kernel: Run /init as init process
Jul 14 21:43:58.754144 kernel: with arguments:
Jul 14 21:43:58.754151 kernel: /init
Jul 14 21:43:58.754158 kernel: with environment:
Jul 14 21:43:58.754165 kernel: HOME=/
Jul 14 21:43:58.754171 kernel: TERM=linux
Jul 14 21:43:58.754178 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:43:58.754187 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 14 21:43:58.754197 systemd[1]: Detected virtualization kvm.
Jul 14 21:43:58.754205 systemd[1]: Detected architecture arm64.
Jul 14 21:43:58.754212 systemd[1]: Running in initrd.
Jul 14 21:43:58.754219 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:43:58.754226 systemd[1]: Hostname set to .
Jul 14 21:43:58.754233 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:43:58.754240 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:43:58.754247 systemd[1]: Started systemd-ask-password-console.path.
Jul 14 21:43:58.754256 systemd[1]: Reached target cryptsetup.target.
Jul 14 21:43:58.754263 systemd[1]: Reached target paths.target.
Jul 14 21:43:58.754269 systemd[1]: Reached target slices.target.
Jul 14 21:43:58.754276 systemd[1]: Reached target swap.target.
Jul 14 21:43:58.754284 systemd[1]: Reached target timers.target.
Jul 14 21:43:58.754291 systemd[1]: Listening on iscsid.socket.
Jul 14 21:43:58.754298 systemd[1]: Listening on iscsiuio.socket.
Jul 14 21:43:58.754307 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 14 21:43:58.754314 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 14 21:43:58.754321 systemd[1]: Listening on systemd-journald.socket.
Jul 14 21:43:58.754328 systemd[1]: Listening on systemd-networkd.socket.
Jul 14 21:43:58.754344 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 14 21:43:58.754351 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 14 21:43:58.754358 systemd[1]: Reached target sockets.target.
Jul 14 21:43:58.754365 systemd[1]: Starting kmod-static-nodes.service...
Jul 14 21:43:58.754373 systemd[1]: Finished network-cleanup.service.
Jul 14 21:43:58.754382 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:43:58.754389 systemd[1]: Starting systemd-journald.service...
Jul 14 21:43:58.754396 systemd[1]: Starting systemd-modules-load.service...
Jul 14 21:43:58.754403 systemd[1]: Starting systemd-resolved.service...
Jul 14 21:43:58.754411 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 14 21:43:58.754418 systemd[1]: Finished kmod-static-nodes.service.
Jul 14 21:43:58.754425 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:43:58.754432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 14 21:43:58.754439 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 14 21:43:58.754448 kernel: audit: type=1130 audit(1752529438.749:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.754455 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 14 21:43:58.754468 systemd-journald[290]: Journal started
Jul 14 21:43:58.754515 systemd-journald[290]: Runtime Journal (/run/log/journal/34e0c2ca29e14ccaa1e40fc966660825) is 6.0M, max 48.7M, 42.6M free.
Jul 14 21:43:58.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.743164 systemd-modules-load[291]: Inserted module 'overlay'
Jul 14 21:43:58.760008 systemd[1]: Started systemd-journald.service.
Jul 14 21:43:58.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.760583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 14 21:43:58.764802 kernel: audit: type=1130 audit(1752529438.759:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.764825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:43:58.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.768890 kernel: audit: type=1130 audit(1752529438.764:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.768932 kernel: Bridge firewalling registered
Jul 14 21:43:58.768886 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 14 21:43:58.769559 systemd-resolved[292]: Positive Trust Anchors:
Jul 14 21:43:58.769566 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:43:58.769593 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 14 21:43:58.774485 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 14 21:43:58.775799 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 14 21:43:58.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.776539 systemd[1]: Started systemd-resolved.service.
Jul 14 21:43:58.780463 kernel: audit: type=1130 audit(1752529438.776:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.779998 systemd[1]: Reached target nss-lookup.target.
Jul 14 21:43:58.784244 kernel: audit: type=1130 audit(1752529438.779:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.784267 kernel: SCSI subsystem initialized
Jul 14 21:43:58.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.784181 systemd[1]: Starting dracut-cmdline.service...
Jul 14 21:43:58.792646 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:43:58.792704 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:43:58.792714 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 14 21:43:58.793241 dracut-cmdline[308]: dracut-dracut-053
Jul 14 21:43:58.795071 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 14 21:43:58.795949 systemd[1]: Finished systemd-modules-load.service.
Jul 14 21:43:58.799954 kernel: audit: type=1130 audit(1752529438.795:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.800064 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236
Jul 14 21:43:58.797632 systemd[1]: Starting systemd-sysctl.service...
Jul 14 21:43:58.807152 systemd[1]: Finished systemd-sysctl.service.
Jul 14 21:43:58.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.810890 kernel: audit: type=1130 audit(1752529438.806:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.860870 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:43:58.874876 kernel: iscsi: registered transport (tcp)
Jul 14 21:43:58.890875 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:43:58.890921 kernel: QLogic iSCSI HBA Driver
Jul 14 21:43:58.926669 systemd[1]: Finished dracut-cmdline.service.
Jul 14 21:43:58.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.928295 systemd[1]: Starting dracut-pre-udev.service...
Jul 14 21:43:58.931291 kernel: audit: type=1130 audit(1752529438.927:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:58.972879 kernel: raid6: neonx8 gen() 13737 MB/s
Jul 14 21:43:58.989868 kernel: raid6: neonx8 xor() 10784 MB/s
Jul 14 21:43:59.006866 kernel: raid6: neonx4 gen() 13554 MB/s
Jul 14 21:43:59.023860 kernel: raid6: neonx4 xor() 11134 MB/s
Jul 14 21:43:59.040867 kernel: raid6: neonx2 gen() 12952 MB/s
Jul 14 21:43:59.057861 kernel: raid6: neonx2 xor() 10373 MB/s
Jul 14 21:43:59.074865 kernel: raid6: neonx1 gen() 10595 MB/s
Jul 14 21:43:59.091868 kernel: raid6: neonx1 xor() 8779 MB/s
Jul 14 21:43:59.108864 kernel: raid6: int64x8 gen() 6266 MB/s
Jul 14 21:43:59.125860 kernel: raid6: int64x8 xor() 3541 MB/s
Jul 14 21:43:59.142866 kernel: raid6: int64x4 gen() 7220 MB/s
Jul 14 21:43:59.159861 kernel: raid6: int64x4 xor() 3852 MB/s
Jul 14 21:43:59.176864 kernel: raid6: int64x2 gen() 6149 MB/s
Jul 14 21:43:59.193866 kernel: raid6: int64x2 xor() 3319 MB/s
Jul 14 21:43:59.210865 kernel: raid6: int64x1 gen() 5041 MB/s
Jul 14 21:43:59.227998 kernel: raid6: int64x1 xor() 2643 MB/s
Jul 14 21:43:59.228014 kernel: raid6: using algorithm neonx8 gen() 13737 MB/s
Jul 14 21:43:59.228022 kernel: raid6: .... xor() 10784 MB/s, rmw enabled
Jul 14 21:43:59.229085 kernel: raid6: using neon recovery algorithm
Jul 14 21:43:59.239888 kernel: xor: measuring software checksum speed
Jul 14 21:43:59.239908 kernel: 8regs : 16520 MB/sec
Jul 14 21:43:59.241186 kernel: 32regs : 20670 MB/sec
Jul 14 21:43:59.241198 kernel: arm64_neon : 27349 MB/sec
Jul 14 21:43:59.241207 kernel: xor: using function: arm64_neon (27349 MB/sec)
Jul 14 21:43:59.299872 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 14 21:43:59.310666 systemd[1]: Finished dracut-pre-udev.service.
Jul 14 21:43:59.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:59.313000 audit: BPF prog-id=7 op=LOAD
Jul 14 21:43:59.313000 audit: BPF prog-id=8 op=LOAD
Jul 14 21:43:59.314803 systemd[1]: Starting systemd-udevd.service...
Jul 14 21:43:59.315960 kernel: audit: type=1130 audit(1752529439.310:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:59.331310 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Jul 14 21:43:59.334766 systemd[1]: Started systemd-udevd.service.
Jul 14 21:43:59.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:59.336343 systemd[1]: Starting dracut-pre-trigger.service...
Jul 14 21:43:59.348741 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Jul 14 21:43:59.380255 systemd[1]: Finished dracut-pre-trigger.service.
Jul 14 21:43:59.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:59.381824 systemd[1]: Starting systemd-udev-trigger.service...
Jul 14 21:43:59.417424 systemd[1]: Finished systemd-udev-trigger.service.
Jul 14 21:43:59.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:43:59.448954 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:43:59.459808 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:43:59.459824 kernel: GPT:9289727 != 19775487
Jul 14 21:43:59.459832 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:43:59.459842 kernel: GPT:9289727 != 19775487
Jul 14 21:43:59.459876 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:43:59.459885 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:43:59.471876 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (547)
Jul 14 21:43:59.476156 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 14 21:43:59.476991 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 14 21:43:59.482751 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 14 21:43:59.486753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 14 21:43:59.489952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 14 21:43:59.491494 systemd[1]: Starting disk-uuid.service...
Jul 14 21:43:59.497837 disk-uuid[563]: Primary Header is updated.
Jul 14 21:43:59.497837 disk-uuid[563]: Secondary Entries is updated.
Jul 14 21:43:59.497837 disk-uuid[563]: Secondary Header is updated.
Jul 14 21:43:59.504874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:43:59.509866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:43:59.513874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:00.514381 disk-uuid[564]: The operation has completed successfully. Jul 14 21:44:00.515324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:00.538708 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 21:44:00.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.538804 systemd[1]: Finished disk-uuid.service. Jul 14 21:44:00.540369 systemd[1]: Starting verity-setup.service... Jul 14 21:44:00.556869 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 14 21:44:00.580551 systemd[1]: Found device dev-mapper-usr.device. Jul 14 21:44:00.582672 systemd[1]: Mounting sysusr-usr.mount... Jul 14 21:44:00.584629 systemd[1]: Finished verity-setup.service. Jul 14 21:44:00.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.634876 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 21:44:00.634975 systemd[1]: Mounted sysusr-usr.mount. Jul 14 21:44:00.635622 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 14 21:44:00.636403 systemd[1]: Starting ignition-setup.service... Jul 14 21:44:00.638157 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 14 21:44:00.647481 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 21:44:00.647538 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:44:00.647556 kernel: BTRFS info (device vda6): has skinny extents Jul 14 21:44:00.657123 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 21:44:00.664583 systemd[1]: Finished ignition-setup.service. Jul 14 21:44:00.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.666445 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 21:44:00.730237 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 21:44:00.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.730000 audit: BPF prog-id=9 op=LOAD Jul 14 21:44:00.732268 systemd[1]: Starting systemd-networkd.service... 
Jul 14 21:44:00.755033 ignition[650]: Ignition 2.14.0 Jul 14 21:44:00.755834 ignition[650]: Stage: fetch-offline Jul 14 21:44:00.756481 ignition[650]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:00.757222 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:00.758248 ignition[650]: parsed url from cmdline: "" Jul 14 21:44:00.758305 ignition[650]: no config URL provided Jul 14 21:44:00.758926 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 21:44:00.758945 ignition[650]: no config at "/usr/lib/ignition/user.ign" Jul 14 21:44:00.758967 ignition[650]: op(1): [started] loading QEMU firmware config module Jul 14 21:44:00.758972 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 21:44:00.762726 systemd-networkd[739]: lo: Link UP Jul 14 21:44:00.762739 systemd-networkd[739]: lo: Gained carrier Jul 14 21:44:00.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.763174 systemd-networkd[739]: Enumeration completed Jul 14 21:44:00.763312 systemd[1]: Started systemd-networkd.service. Jul 14 21:44:00.765931 ignition[650]: op(1): [finished] loading QEMU firmware config module Jul 14 21:44:00.763390 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:44:00.764609 systemd[1]: Reached target network.target. Jul 14 21:44:00.764868 systemd-networkd[739]: eth0: Link UP Jul 14 21:44:00.764872 systemd-networkd[739]: eth0: Gained carrier Jul 14 21:44:00.766661 systemd[1]: Starting iscsiuio.service... Jul 14 21:44:00.775818 systemd[1]: Started iscsiuio.service. Jul 14 21:44:00.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:00.778063 systemd[1]: Starting iscsid.service... Jul 14 21:44:00.781764 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 21:44:00.781764 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 14 21:44:00.781764 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 21:44:00.781764 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 14 21:44:00.781764 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 21:44:00.781764 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 21:44:00.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.784687 systemd[1]: Started iscsid.service. Jul 14 21:44:00.787934 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:44:00.788794 systemd[1]: Starting dracut-initqueue.service... Jul 14 21:44:00.799795 systemd[1]: Finished dracut-initqueue.service. Jul 14 21:44:00.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.800680 systemd[1]: Reached target remote-fs-pre.target. Jul 14 21:44:00.801905 systemd[1]: Reached target remote-cryptsetup.target. 
Jul 14 21:44:00.803187 systemd[1]: Reached target remote-fs.target. Jul 14 21:44:00.805399 systemd[1]: Starting dracut-pre-mount.service... Jul 14 21:44:00.813537 systemd[1]: Finished dracut-pre-mount.service. Jul 14 21:44:00.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.824763 ignition[650]: parsing config with SHA512: 17d319453f4226b2d156fc8c9d53ba4b9f3e885fb32d6eee9d7127bea3ea3aab07e1e5a3c1f3b11025028058c1021e801ea780856d1156a56ae3b9b7151dc02b Jul 14 21:44:00.831504 unknown[650]: fetched base config from "system" Jul 14 21:44:00.831536 unknown[650]: fetched user config from "qemu" Jul 14 21:44:00.833430 ignition[650]: fetch-offline: fetch-offline passed Jul 14 21:44:00.833530 ignition[650]: Ignition finished successfully Jul 14 21:44:00.834417 systemd[1]: Finished ignition-fetch-offline.service. Jul 14 21:44:00.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.835423 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 21:44:00.836272 systemd[1]: Starting ignition-kargs.service... Jul 14 21:44:00.845634 ignition[760]: Ignition 2.14.0 Jul 14 21:44:00.845646 ignition[760]: Stage: kargs Jul 14 21:44:00.845757 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:00.845769 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:00.846747 ignition[760]: kargs: kargs passed Jul 14 21:44:00.849112 systemd[1]: Finished ignition-kargs.service. Jul 14 21:44:00.846797 ignition[760]: Ignition finished successfully Jul 14 21:44:00.850736 systemd[1]: Starting ignition-disks.service... 
Jul 14 21:44:00.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.857695 ignition[766]: Ignition 2.14.0 Jul 14 21:44:00.857705 ignition[766]: Stage: disks Jul 14 21:44:00.857809 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:00.857819 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:00.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.860090 systemd[1]: Finished ignition-disks.service. Jul 14 21:44:00.858744 ignition[766]: disks: disks passed Jul 14 21:44:00.860822 systemd[1]: Reached target initrd-root-device.target. Jul 14 21:44:00.858792 ignition[766]: Ignition finished successfully Jul 14 21:44:00.861962 systemd[1]: Reached target local-fs-pre.target. Jul 14 21:44:00.862937 systemd[1]: Reached target local-fs.target. Jul 14 21:44:00.863799 systemd[1]: Reached target sysinit.target. Jul 14 21:44:00.864868 systemd[1]: Reached target basic.target. Jul 14 21:44:00.866804 systemd[1]: Starting systemd-fsck-root.service... Jul 14 21:44:00.879014 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 14 21:44:00.883377 systemd[1]: Finished systemd-fsck-root.service. Jul 14 21:44:00.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.885237 systemd[1]: Mounting sysroot.mount... Jul 14 21:44:00.892860 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 14 21:44:00.893187 systemd[1]: Mounted sysroot.mount. 
Jul 14 21:44:00.893798 systemd[1]: Reached target initrd-root-fs.target. Jul 14 21:44:00.895823 systemd[1]: Mounting sysroot-usr.mount... Jul 14 21:44:00.897088 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 14 21:44:00.897137 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 21:44:00.897163 systemd[1]: Reached target ignition-diskful.target. Jul 14 21:44:00.899452 systemd[1]: Mounted sysroot-usr.mount. Jul 14 21:44:00.901092 systemd[1]: Starting initrd-setup-root.service... Jul 14 21:44:00.905715 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 21:44:00.909745 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Jul 14 21:44:00.913784 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 21:44:00.920214 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 21:44:00.948828 systemd[1]: Finished initrd-setup-root.service. Jul 14 21:44:00.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.950342 systemd[1]: Starting ignition-mount.service... Jul 14 21:44:00.951588 systemd[1]: Starting sysroot-boot.service... Jul 14 21:44:00.956397 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 14 21:44:00.964588 ignition[828]: INFO : Ignition 2.14.0 Jul 14 21:44:00.964588 ignition[828]: INFO : Stage: mount Jul 14 21:44:00.966882 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:00.966882 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:00.966882 ignition[828]: INFO : mount: mount passed Jul 14 21:44:00.966882 ignition[828]: INFO : Ignition finished successfully Jul 14 21:44:00.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:00.966505 systemd[1]: Finished ignition-mount.service. Jul 14 21:44:00.975891 systemd[1]: Finished sysroot-boot.service. Jul 14 21:44:00.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:01.591556 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 21:44:01.597875 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (836) Jul 14 21:44:01.600226 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 21:44:01.600285 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:44:01.600313 kernel: BTRFS info (device vda6): has skinny extents Jul 14 21:44:01.603212 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 21:44:01.604779 systemd[1]: Starting ignition-files.service... 
Jul 14 21:44:01.619666 ignition[856]: INFO : Ignition 2.14.0 Jul 14 21:44:01.619666 ignition[856]: INFO : Stage: files Jul 14 21:44:01.620966 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:01.620966 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:01.620966 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Jul 14 21:44:01.625801 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 21:44:01.625801 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 21:44:01.629312 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 21:44:01.630377 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 21:44:01.631671 unknown[856]: wrote ssh authorized keys file for user: core Jul 14 21:44:01.632635 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 21:44:01.632635 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 14 21:44:01.632635 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 14 21:44:01.769453 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 21:44:01.975699 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 14 21:44:01.977580 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 21:44:01.979279 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 14 21:44:02.184334 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 21:44:02.294408 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 21:44:02.295764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 21:44:02.305820 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:44:02.305820 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:44:02.305820 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:02.305820 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:02.305820 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:02.305820 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 14 21:44:02.603940 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 14 21:44:02.633190 systemd-networkd[739]: eth0: Gained IPv6LL Jul 14 21:44:03.034877 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:03.034877 ignition[856]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 21:44:03.037706 ignition[856]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:44:03.075832 ignition[856]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:44:03.077933 ignition[856]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 21:44:03.077933 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:44:03.077933 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:44:03.077933 ignition[856]: INFO : files: files passed Jul 14 21:44:03.077933 ignition[856]: INFO : Ignition finished successfully Jul 14 21:44:03.087630 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 14 21:44:03.087653 kernel: audit: type=1130 audit(1752529443.079:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:03.077991 systemd[1]: Finished ignition-files.service. Jul 14 21:44:03.080634 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 14 21:44:03.086304 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 14 21:44:03.091262 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 14 21:44:03.087259 systemd[1]: Starting ignition-quench.service... Jul 14 21:44:03.098246 kernel: audit: type=1130 audit(1752529443.092:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.098279 kernel: audit: type=1131 audit(1752529443.092:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.098384 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 21:44:03.102479 kernel: audit: type=1130 audit(1752529443.098:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:03.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.090378 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 21:44:03.090482 systemd[1]: Finished ignition-quench.service. Jul 14 21:44:03.093339 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 14 21:44:03.099080 systemd[1]: Reached target ignition-complete.target. Jul 14 21:44:03.103955 systemd[1]: Starting initrd-parse-etc.service... Jul 14 21:44:03.118298 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 21:44:03.118427 systemd[1]: Finished initrd-parse-etc.service. Jul 14 21:44:03.124740 kernel: audit: type=1130 audit(1752529443.118:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.124765 kernel: audit: type=1131 audit(1752529443.118:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.119720 systemd[1]: Reached target initrd-fs.target. Jul 14 21:44:03.125305 systemd[1]: Reached target initrd.target. Jul 14 21:44:03.126352 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Jul 14 21:44:03.127306 systemd[1]: Starting dracut-pre-pivot.service... Jul 14 21:44:03.138287 systemd[1]: Finished dracut-pre-pivot.service. Jul 14 21:44:03.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.141939 kernel: audit: type=1130 audit(1752529443.138:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.139901 systemd[1]: Starting initrd-cleanup.service... Jul 14 21:44:03.148627 systemd[1]: Stopped target nss-lookup.target. Jul 14 21:44:03.149381 systemd[1]: Stopped target remote-cryptsetup.target. Jul 14 21:44:03.150459 systemd[1]: Stopped target timers.target. Jul 14 21:44:03.151508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 21:44:03.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.151628 systemd[1]: Stopped dracut-pre-pivot.service. Jul 14 21:44:03.156442 kernel: audit: type=1131 audit(1752529443.151:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.152629 systemd[1]: Stopped target initrd.target. Jul 14 21:44:03.156067 systemd[1]: Stopped target basic.target. Jul 14 21:44:03.156998 systemd[1]: Stopped target ignition-complete.target. Jul 14 21:44:03.158013 systemd[1]: Stopped target ignition-diskful.target. Jul 14 21:44:03.158984 systemd[1]: Stopped target initrd-root-device.target. Jul 14 21:44:03.160072 systemd[1]: Stopped target remote-fs.target. Jul 14 21:44:03.161118 systemd[1]: Stopped target remote-fs-pre.target. 
Jul 14 21:44:03.162205 systemd[1]: Stopped target sysinit.target. Jul 14 21:44:03.163231 systemd[1]: Stopped target local-fs.target. Jul 14 21:44:03.164258 systemd[1]: Stopped target local-fs-pre.target. Jul 14 21:44:03.165255 systemd[1]: Stopped target swap.target. Jul 14 21:44:03.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.166204 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 21:44:03.171375 kernel: audit: type=1131 audit(1752529443.167:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.166336 systemd[1]: Stopped dracut-pre-mount.service. Jul 14 21:44:03.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.167423 systemd[1]: Stopped target cryptsetup.target. Jul 14 21:44:03.175797 kernel: audit: type=1131 audit(1752529443.171:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.170771 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:44:03.170908 systemd[1]: Stopped dracut-initqueue.service. Jul 14 21:44:03.172057 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:44:03.172157 systemd[1]: Stopped ignition-fetch-offline.service. 
Jul 14 21:44:03.175506 systemd[1]: Stopped target paths.target. Jul 14 21:44:03.176392 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:44:03.180880 systemd[1]: Stopped systemd-ask-password-console.path. Jul 14 21:44:03.181621 systemd[1]: Stopped target slices.target. Jul 14 21:44:03.182625 systemd[1]: Stopped target sockets.target. Jul 14 21:44:03.183613 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:44:03.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.183740 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 14 21:44:03.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.184736 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:44:03.184830 systemd[1]: Stopped ignition-files.service. Jul 14 21:44:03.188306 iscsid[745]: iscsid shutting down. Jul 14 21:44:03.186908 systemd[1]: Stopping ignition-mount.service... Jul 14 21:44:03.187915 systemd[1]: Stopping iscsid.service... Jul 14 21:44:03.189620 systemd[1]: Stopping sysroot-boot.service... Jul 14 21:44:03.190204 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:44:03.190404 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 21:44:03.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 14 21:44:03.191421 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:44:03.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.196129 ignition[896]: INFO : Ignition 2.14.0 Jul 14 21:44:03.196129 ignition[896]: INFO : Stage: umount Jul 14 21:44:03.196129 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:03.196129 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:03.196129 ignition[896]: INFO : umount: umount passed Jul 14 21:44:03.196129 ignition[896]: INFO : Ignition finished successfully Jul 14 21:44:03.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.191570 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 21:44:03.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.194699 systemd[1]: iscsid.service: Deactivated successfully. 
Jul 14 21:44:03.194809 systemd[1]: Stopped iscsid.service. Jul 14 21:44:03.197455 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:44:03.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.197547 systemd[1]: Finished initrd-cleanup.service. Jul 14 21:44:03.199140 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:44:03.199229 systemd[1]: Stopped ignition-mount.service. Jul 14 21:44:03.200514 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:44:03.200548 systemd[1]: Closed iscsid.socket. Jul 14 21:44:03.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.201429 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:44:03.201470 systemd[1]: Stopped ignition-disks.service. Jul 14 21:44:03.202768 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:44:03.202809 systemd[1]: Stopped ignition-kargs.service. Jul 14 21:44:03.204624 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:44:03.204670 systemd[1]: Stopped ignition-setup.service. Jul 14 21:44:03.205413 systemd[1]: Stopping iscsiuio.service... Jul 14 21:44:03.207185 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:44:03.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.209702 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 21:44:03.209802 systemd[1]: Stopped iscsiuio.service. Jul 14 21:44:03.210817 systemd[1]: Stopped target network.target. 
Jul 14 21:44:03.211649 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:44:03.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.211687 systemd[1]: Closed iscsiuio.socket. Jul 14 21:44:03.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.212844 systemd[1]: Stopping systemd-networkd.service... Jul 14 21:44:03.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.214172 systemd[1]: Stopping systemd-resolved.service... Jul 14 21:44:03.217351 systemd-networkd[739]: eth0: DHCPv6 lease lost Jul 14 21:44:03.230000 audit: BPF prog-id=9 op=UNLOAD Jul 14 21:44:03.218342 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:44:03.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.218442 systemd[1]: Stopped systemd-networkd.service. Jul 14 21:44:03.219390 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:44:03.219422 systemd[1]: Closed systemd-networkd.socket. Jul 14 21:44:03.221114 systemd[1]: Stopping network-cleanup.service... Jul 14 21:44:03.222005 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:44:03.237000 audit: BPF prog-id=6 op=UNLOAD Jul 14 21:44:03.222069 systemd[1]: Stopped parse-ip-for-networkd.service. 
Jul 14 21:44:03.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.224059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:44:03.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.224111 systemd[1]: Stopped systemd-sysctl.service. Jul 14 21:44:03.225812 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:44:03.225877 systemd[1]: Stopped systemd-modules-load.service. Jul 14 21:44:03.226733 systemd[1]: Stopping systemd-udevd.service... Jul 14 21:44:03.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.232005 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:44:03.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.232551 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:44:03.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.232660 systemd[1]: Stopped systemd-resolved.service. Jul 14 21:44:03.236918 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:44:03.237047 systemd[1]: Stopped systemd-udevd.service. 
Jul 14 21:44:03.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.239463 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:44:03.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.239571 systemd[1]: Stopped network-cleanup.service. Jul 14 21:44:03.240490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:44:03.240525 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 21:44:03.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.242304 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:44:03.242351 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 21:44:03.243443 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:44:03.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.243490 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 21:44:03.244593 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:44:03.244630 systemd[1]: Stopped dracut-cmdline.service. Jul 14 21:44:03.246180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 14 21:44:03.246224 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 21:44:03.247812 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 21:44:03.248768 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:44:03.248832 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 14 21:44:03.250644 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:44:03.250689 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 21:44:03.251435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:44:03.251475 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 21:44:03.255032 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 21:44:03.255538 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:44:03.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.255634 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 21:44:03.266342 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:44:03.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:03.266444 systemd[1]: Stopped sysroot-boot.service. Jul 14 21:44:03.267223 systemd[1]: Reached target initrd-switch-root.target. Jul 14 21:44:03.268380 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:44:03.268444 systemd[1]: Stopped initrd-setup-root.service. Jul 14 21:44:03.270404 systemd[1]: Starting initrd-switch-root.service... Jul 14 21:44:03.277872 systemd[1]: Switching root. 
Jul 14 21:44:03.298487 systemd-journald[290]: Journal stopped Jul 14 21:44:05.359688 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 14 21:44:05.359752 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 21:44:05.359765 kernel: SELinux: Class anon_inode not defined in policy. Jul 14 21:44:05.359775 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 21:44:05.359784 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:44:05.359794 kernel: SELinux: policy capability open_perms=1 Jul 14 21:44:05.359804 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:44:05.359814 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:44:05.359823 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:44:05.359832 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:44:05.359843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:44:05.359867 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:44:05.359880 systemd[1]: Successfully loaded SELinux policy in 39.535ms. Jul 14 21:44:05.359900 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.216ms. Jul 14 21:44:05.359915 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 21:44:05.359927 systemd[1]: Detected virtualization kvm. Jul 14 21:44:05.359940 systemd[1]: Detected architecture arm64. Jul 14 21:44:05.359950 systemd[1]: Detected first boot. Jul 14 21:44:05.359960 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:44:05.359972 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 14 21:44:05.359983 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:44:05.359993 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:44:05.360005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:44:05.360016 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:44:05.360029 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 21:44:05.360039 systemd[1]: Stopped initrd-switch-root.service. Jul 14 21:44:05.360050 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:44:05.360061 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 21:44:05.360072 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 21:44:05.360082 systemd[1]: Created slice system-getty.slice. Jul 14 21:44:05.360093 systemd[1]: Created slice system-modprobe.slice. Jul 14 21:44:05.360103 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 14 21:44:05.360115 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 21:44:05.360126 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 21:44:05.360136 systemd[1]: Created slice user.slice. Jul 14 21:44:05.360147 systemd[1]: Started systemd-ask-password-console.path. Jul 14 21:44:05.360157 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 21:44:05.360167 systemd[1]: Set up automount boot.automount. Jul 14 21:44:05.360177 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 21:44:05.360187 systemd[1]: Stopped target initrd-switch-root.target. Jul 14 21:44:05.360199 systemd[1]: Stopped target initrd-fs.target. 
Jul 14 21:44:05.360210 systemd[1]: Stopped target initrd-root-fs.target. Jul 14 21:44:05.360220 systemd[1]: Reached target integritysetup.target. Jul 14 21:44:05.360230 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 21:44:05.360241 systemd[1]: Reached target remote-fs.target. Jul 14 21:44:05.360252 systemd[1]: Reached target slices.target. Jul 14 21:44:05.360262 systemd[1]: Reached target swap.target. Jul 14 21:44:05.360273 systemd[1]: Reached target torcx.target. Jul 14 21:44:05.360283 systemd[1]: Reached target veritysetup.target. Jul 14 21:44:05.360295 systemd[1]: Listening on systemd-coredump.socket. Jul 14 21:44:05.360305 systemd[1]: Listening on systemd-initctl.socket. Jul 14 21:44:05.360324 systemd[1]: Listening on systemd-networkd.socket. Jul 14 21:44:05.360337 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 21:44:05.360348 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 21:44:05.360358 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 21:44:05.360373 systemd[1]: Mounting dev-hugepages.mount... Jul 14 21:44:05.360384 systemd[1]: Mounting dev-mqueue.mount... Jul 14 21:44:05.360395 systemd[1]: Mounting media.mount... Jul 14 21:44:05.360406 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 21:44:05.360417 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 21:44:05.360427 systemd[1]: Mounting tmp.mount... Jul 14 21:44:05.360440 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 21:44:05.360450 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:44:05.360460 systemd[1]: Starting kmod-static-nodes.service... Jul 14 21:44:05.360470 systemd[1]: Starting modprobe@configfs.service... Jul 14 21:44:05.360481 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:44:05.360490 systemd[1]: Starting modprobe@drm.service... Jul 14 21:44:05.360502 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 14 21:44:05.360512 systemd[1]: Starting modprobe@fuse.service... Jul 14 21:44:05.360522 systemd[1]: Starting modprobe@loop.service... Jul 14 21:44:05.360533 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:44:05.360544 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:44:05.360554 systemd[1]: Stopped systemd-fsck-root.service. Jul 14 21:44:05.360565 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 21:44:05.360575 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:44:05.360585 systemd[1]: Stopped systemd-journald.service. Jul 14 21:44:05.360597 kernel: fuse: init (API version 7.34) Jul 14 21:44:05.360607 kernel: loop: module loaded Jul 14 21:44:05.360617 systemd[1]: Starting systemd-journald.service... Jul 14 21:44:05.360628 systemd[1]: Starting systemd-modules-load.service... Jul 14 21:44:05.360638 systemd[1]: Starting systemd-network-generator.service... Jul 14 21:44:05.360649 systemd[1]: Starting systemd-remount-fs.service... Jul 14 21:44:05.360660 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 21:44:05.360671 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:44:05.360681 systemd[1]: Stopped verity-setup.service. Jul 14 21:44:05.360692 systemd[1]: Mounted dev-hugepages.mount. Jul 14 21:44:05.360703 systemd[1]: Mounted dev-mqueue.mount. Jul 14 21:44:05.360713 systemd[1]: Mounted media.mount. Jul 14 21:44:05.360724 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 21:44:05.360734 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 14 21:44:05.360748 systemd-journald[995]: Journal started Jul 14 21:44:05.360789 systemd-journald[995]: Runtime Journal (/run/log/journal/34e0c2ca29e14ccaa1e40fc966660825) is 6.0M, max 48.7M, 42.6M free. 
Jul 14 21:44:03.373000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:44:03.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:44:03.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:44:03.476000 audit: BPF prog-id=10 op=LOAD Jul 14 21:44:03.476000 audit: BPF prog-id=10 op=UNLOAD Jul 14 21:44:03.476000 audit: BPF prog-id=11 op=LOAD Jul 14 21:44:03.476000 audit: BPF prog-id=11 op=UNLOAD Jul 14 21:44:03.516000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 14 21:44:03.516000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd8b4 a1=400013ede0 a2=4000145040 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:44:03.516000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 21:44:03.518000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 14 21:44:03.518000 audit[929]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd989 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:44:03.518000 audit: CWD cwd="/" Jul 14 21:44:03.518000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:44:03.518000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:44:03.518000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 21:44:05.231000 audit: BPF prog-id=12 op=LOAD Jul 14 21:44:05.231000 audit: BPF prog-id=3 op=UNLOAD Jul 14 21:44:05.231000 audit: BPF prog-id=13 op=LOAD Jul 14 21:44:05.231000 audit: BPF prog-id=14 op=LOAD Jul 14 21:44:05.231000 audit: BPF prog-id=4 op=UNLOAD Jul 14 21:44:05.231000 audit: BPF prog-id=5 op=UNLOAD Jul 14 21:44:05.232000 audit: BPF prog-id=15 op=LOAD Jul 14 21:44:05.232000 audit: BPF prog-id=12 op=UNLOAD Jul 14 21:44:05.232000 audit: BPF prog-id=16 op=LOAD Jul 14 21:44:05.232000 audit: BPF prog-id=17 op=LOAD Jul 14 21:44:05.232000 audit: BPF prog-id=13 op=UNLOAD Jul 14 21:44:05.232000 audit: BPF prog-id=14 op=UNLOAD Jul 14 21:44:05.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:05.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.242000 audit: BPF prog-id=15 op=UNLOAD Jul 14 21:44:05.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.335000 audit: BPF prog-id=18 op=LOAD Jul 14 21:44:05.335000 audit: BPF prog-id=19 op=LOAD Jul 14 21:44:05.335000 audit: BPF prog-id=20 op=LOAD Jul 14 21:44:05.335000 audit: BPF prog-id=16 op=UNLOAD Jul 14 21:44:05.335000 audit: BPF prog-id=17 op=UNLOAD Jul 14 21:44:05.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:05.358000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 21:44:05.358000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe0a6fed0 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:44:05.358000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 21:44:05.231027 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:44:03.515078 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:44:05.231041 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 21:44:03.515358 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 21:44:05.234451 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 21:44:03.515377 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 21:44:05.362344 systemd[1]: Started systemd-journald.service. 
Jul 14 21:44:03.515410 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 14 21:44:03.515420 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 14 21:44:03.515450 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 14 21:44:03.515462 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 14 21:44:03.515674 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 14 21:44:03.515709 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 21:44:03.515721 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 21:44:03.516630 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 14 21:44:05.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:03.516664 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 14 21:44:03.516685 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.101: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.101 Jul 14 21:44:03.516699 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 14 21:44:03.516719 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.101: no such file or directory" path=/var/lib/torcx/store/3510.3.101 Jul 14 21:44:03.516733 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 14 21:44:05.363413 systemd[1]: Mounted tmp.mount. 
Jul 14 21:44:04.969654 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:44:04.969944 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:44:04.970051 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:44:04.970217 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:44:04.970270 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 14 21:44:04.970355 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:44:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 14 21:44:05.364366 systemd[1]: Finished kmod-static-nodes.service. 
Jul 14 21:44:05.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.365339 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:44:05.366157 systemd[1]: Finished modprobe@configfs.service. Jul 14 21:44:05.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.367105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:44:05.369299 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:44:05.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.370415 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:44:05.370593 systemd[1]: Finished modprobe@drm.service. Jul 14 21:44:05.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:05.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.371557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:44:05.371731 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:44:05.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.372692 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:44:05.372894 systemd[1]: Finished modprobe@fuse.service. Jul 14 21:44:05.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.373874 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:44:05.374044 systemd[1]: Finished modprobe@loop.service. Jul 14 21:44:05.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:05.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.375098 systemd[1]: Finished systemd-modules-load.service. Jul 14 21:44:05.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.376115 systemd[1]: Finished systemd-network-generator.service. Jul 14 21:44:05.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.377209 systemd[1]: Finished systemd-remount-fs.service. Jul 14 21:44:05.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.378359 systemd[1]: Reached target network-pre.target. Jul 14 21:44:05.380630 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 21:44:05.382751 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 21:44:05.383438 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:44:05.385588 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 21:44:05.387596 systemd[1]: Starting systemd-journal-flush.service... Jul 14 21:44:05.388539 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:44:05.390029 systemd[1]: Starting systemd-random-seed.service... 
Jul 14 21:44:05.390889 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:44:05.398905 systemd[1]: Starting systemd-sysctl.service... Jul 14 21:44:05.401919 systemd-journald[995]: Time spent on flushing to /var/log/journal/34e0c2ca29e14ccaa1e40fc966660825 is 16.494ms for 999 entries. Jul 14 21:44:05.401919 systemd-journald[995]: System Journal (/var/log/journal/34e0c2ca29e14ccaa1e40fc966660825) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:44:05.426070 systemd-journald[995]: Received client request to flush runtime journal. Jul 14 21:44:05.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.402260 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 21:44:05.404178 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 21:44:05.426589 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 21:44:05.405258 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 21:44:05.406386 systemd[1]: Finished systemd-random-seed.service. 
Jul 14 21:44:05.407448 systemd[1]: Reached target first-boot-complete.target. Jul 14 21:44:05.409404 systemd[1]: Starting systemd-sysusers.service... Jul 14 21:44:05.414767 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 21:44:05.416887 systemd[1]: Starting systemd-udev-settle.service... Jul 14 21:44:05.422539 systemd[1]: Finished systemd-sysctl.service. Jul 14 21:44:05.427051 systemd[1]: Finished systemd-journal-flush.service. Jul 14 21:44:05.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.440140 systemd[1]: Finished systemd-sysusers.service. Jul 14 21:44:05.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.442107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 21:44:05.462220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 21:44:05.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.798471 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 21:44:05.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:05.799000 audit: BPF prog-id=21 op=LOAD Jul 14 21:44:05.799000 audit: BPF prog-id=22 op=LOAD Jul 14 21:44:05.799000 audit: BPF prog-id=7 op=UNLOAD Jul 14 21:44:05.799000 audit: BPF prog-id=8 op=UNLOAD Jul 14 21:44:05.800648 systemd[1]: Starting systemd-udevd.service... Jul 14 21:44:05.822271 systemd-udevd[1034]: Using default interface naming scheme 'v252'. Jul 14 21:44:05.843889 systemd[1]: Started systemd-udevd.service. Jul 14 21:44:05.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.847000 audit: BPF prog-id=23 op=LOAD Jul 14 21:44:05.855131 systemd[1]: Starting systemd-networkd.service... Jul 14 21:44:05.858000 audit: BPF prog-id=24 op=LOAD Jul 14 21:44:05.858000 audit: BPF prog-id=25 op=LOAD Jul 14 21:44:05.858000 audit: BPF prog-id=26 op=LOAD Jul 14 21:44:05.859824 systemd[1]: Starting systemd-userdbd.service... Jul 14 21:44:05.870639 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 14 21:44:05.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.904996 systemd[1]: Started systemd-userdbd.service. Jul 14 21:44:05.926196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 21:44:05.960138 systemd-networkd[1048]: lo: Link UP Jul 14 21:44:05.960432 systemd-networkd[1048]: lo: Gained carrier Jul 14 21:44:05.960906 systemd-networkd[1048]: Enumeration completed Jul 14 21:44:05.961100 systemd[1]: Started systemd-networkd.service. Jul 14 21:44:05.961102 systemd-networkd[1048]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 14 21:44:05.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.962518 systemd-networkd[1048]: eth0: Link UP Jul 14 21:44:05.962600 systemd-networkd[1048]: eth0: Gained carrier Jul 14 21:44:05.974276 systemd[1]: Finished systemd-udev-settle.service. Jul 14 21:44:05.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:05.976350 systemd[1]: Starting lvm2-activation-early.service... Jul 14 21:44:05.987022 systemd-networkd[1048]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:44:05.995394 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:44:06.025789 systemd[1]: Finished lvm2-activation-early.service. Jul 14 21:44:06.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.026744 systemd[1]: Reached target cryptsetup.target. Jul 14 21:44:06.028690 systemd[1]: Starting lvm2-activation.service... Jul 14 21:44:06.032409 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:44:06.055823 systemd[1]: Finished lvm2-activation.service. Jul 14 21:44:06.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.056646 systemd[1]: Reached target local-fs-pre.target. 
Jul 14 21:44:06.057402 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:44:06.057434 systemd[1]: Reached target local-fs.target. Jul 14 21:44:06.058153 systemd[1]: Reached target machines.target. Jul 14 21:44:06.060042 systemd[1]: Starting ldconfig.service... Jul 14 21:44:06.060921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.060982 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:44:06.062116 systemd[1]: Starting systemd-boot-update.service... Jul 14 21:44:06.063945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 21:44:06.066679 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 21:44:06.068685 systemd[1]: Starting systemd-sysext.service... Jul 14 21:44:06.069806 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Jul 14 21:44:06.071106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 21:44:06.087644 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 21:44:06.088828 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 14 21:44:06.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.096730 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 21:44:06.096977 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 21:44:06.141867 kernel: loop0: detected capacity change from 0 to 207008 Jul 14 21:44:06.146131 systemd[1]: Finished systemd-machine-id-commit.service. 
Jul 14 21:44:06.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.153871 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:44:06.163189 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Jul 14 21:44:06.163189 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters Jul 14 21:44:06.165178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 21:44:06.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.177877 kernel: loop1: detected capacity change from 0 to 207008 Jul 14 21:44:06.184493 (sd-sysext)[1084]: Using extensions 'kubernetes'. Jul 14 21:44:06.184893 (sd-sysext)[1084]: Merged extensions into '/usr'. Jul 14 21:44:06.201958 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.203548 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:44:06.205552 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:44:06.207358 systemd[1]: Starting modprobe@loop.service... Jul 14 21:44:06.208316 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.208458 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:44:06.209277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:44:06.209426 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 14 21:44:06.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.210654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:44:06.210773 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:44:06.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.212053 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:44:06.212168 systemd[1]: Finished modprobe@loop.service. Jul 14 21:44:06.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.213448 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 14 21:44:06.213552 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.253114 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 21:44:06.257741 systemd[1]: Finished ldconfig.service. Jul 14 21:44:06.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.353941 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:44:06.355917 systemd[1]: Mounting boot.mount... Jul 14 21:44:06.357700 systemd[1]: Mounting usr-share-oem.mount... Jul 14 21:44:06.364520 systemd[1]: Mounted boot.mount. Jul 14 21:44:06.365351 systemd[1]: Mounted usr-share-oem.mount. Jul 14 21:44:06.367338 systemd[1]: Finished systemd-sysext.service. Jul 14 21:44:06.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.369296 systemd[1]: Starting ensure-sysext.service... Jul 14 21:44:06.371248 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 14 21:44:06.374273 systemd[1]: Finished systemd-boot-update.service. Jul 14 21:44:06.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.376657 systemd[1]: Reloading. Jul 14 21:44:06.381084 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 21:44:06.381962 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 14 21:44:06.383396 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 21:44:06.419305 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-07-14T21:44:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:44:06.419350 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-07-14T21:44:06Z" level=info msg="torcx already run" Jul 14 21:44:06.475975 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:44:06.475996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:44:06.491715 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 21:44:06.533000 audit: BPF prog-id=27 op=LOAD Jul 14 21:44:06.533000 audit: BPF prog-id=24 op=UNLOAD Jul 14 21:44:06.533000 audit: BPF prog-id=28 op=LOAD Jul 14 21:44:06.533000 audit: BPF prog-id=29 op=LOAD Jul 14 21:44:06.533000 audit: BPF prog-id=25 op=UNLOAD Jul 14 21:44:06.533000 audit: BPF prog-id=26 op=UNLOAD Jul 14 21:44:06.535000 audit: BPF prog-id=30 op=LOAD Jul 14 21:44:06.535000 audit: BPF prog-id=23 op=UNLOAD Jul 14 21:44:06.536000 audit: BPF prog-id=31 op=LOAD Jul 14 21:44:06.536000 audit: BPF prog-id=18 op=UNLOAD Jul 14 21:44:06.536000 audit: BPF prog-id=32 op=LOAD Jul 14 21:44:06.536000 audit: BPF prog-id=33 op=LOAD Jul 14 21:44:06.536000 audit: BPF prog-id=19 op=UNLOAD Jul 14 21:44:06.536000 audit: BPF prog-id=20 op=UNLOAD Jul 14 21:44:06.536000 audit: BPF prog-id=34 op=LOAD Jul 14 21:44:06.536000 audit: BPF prog-id=35 op=LOAD Jul 14 21:44:06.536000 audit: BPF prog-id=21 op=UNLOAD Jul 14 21:44:06.536000 audit: BPF prog-id=22 op=UNLOAD Jul 14 21:44:06.540229 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 21:44:06.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.544833 systemd[1]: Starting audit-rules.service... Jul 14 21:44:06.546758 systemd[1]: Starting clean-ca-certificates.service... Jul 14 21:44:06.548752 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 21:44:06.550000 audit: BPF prog-id=36 op=LOAD Jul 14 21:44:06.552154 systemd[1]: Starting systemd-resolved.service... Jul 14 21:44:06.555000 audit: BPF prog-id=37 op=LOAD Jul 14 21:44:06.557493 systemd[1]: Starting systemd-timesyncd.service... Jul 14 21:44:06.559397 systemd[1]: Starting systemd-update-utmp.service... Jul 14 21:44:06.564891 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 14 21:44:06.568036 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:44:06.567000 audit[1157]: SYSTEM_BOOT pid=1157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.569950 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:44:06.571764 systemd[1]: Starting modprobe@loop.service... Jul 14 21:44:06.572411 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.572540 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:44:06.573435 systemd[1]: Finished clean-ca-certificates.service. Jul 14 21:44:06.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.574648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:44:06.574777 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:44:06.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.575878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:44:06.576005 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 14 21:44:06.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.577095 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:44:06.577222 systemd[1]: Finished modprobe@loop.service. Jul 14 21:44:06.580035 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:44:06.580195 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.580324 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:44:06.582136 systemd[1]: Finished systemd-update-utmp.service. Jul 14 21:44:06.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.583924 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 14 21:44:06.585201 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:44:06.587051 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:44:06.588743 systemd[1]: Starting modprobe@loop.service... Jul 14 21:44:06.589475 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.589603 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:44:06.589709 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:44:06.590560 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 21:44:06.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.591776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:44:06.591930 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:44:06.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.593028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:44:06.593141 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 14 21:44:06.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.594170 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:44:06.594287 systemd[1]: Finished modprobe@loop.service. Jul 14 21:44:06.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:06.597422 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.598715 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:44:06.600715 systemd[1]: Starting modprobe@drm.service... Jul 14 21:44:06.602659 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:44:06.604666 systemd[1]: Starting modprobe@loop.service... Jul 14 21:44:06.605396 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:44:06.605520 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:44:06.606773 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 14 21:44:06.608948 systemd[1]: Starting systemd-update-done.service...
Jul 14 21:44:06.609732 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:44:06.611076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:44:06.611230 systemd[1]: Finished modprobe@dm_mod.service.
Jul 14 21:44:06.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.612290 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:44:06.612426 systemd[1]: Finished modprobe@drm.service.
Jul 14 21:44:06.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.613493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:44:06.613612 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 14 21:44:06.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.614689 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:44:06.614816 systemd[1]: Finished modprobe@loop.service.
Jul 14 21:44:06.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.615826 systemd[1]: Started systemd-timesyncd.service.
Jul 14 21:44:06.616586 systemd-timesyncd[1156]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 21:44:06.616669 systemd-timesyncd[1156]: Initial clock synchronization to Mon 2025-07-14 21:44:06.218578 UTC.
Jul 14 21:44:06.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.617409 systemd[1]: Finished systemd-update-done.service.
Jul 14 21:44:06.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:06.619043 systemd[1]: Reached target time-set.target.
Jul 14 21:44:06.618000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 14 21:44:06.618000 audit[1184]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc751b40 a2=420 a3=0 items=0 ppid=1151 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:44:06.618000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 14 21:44:06.619774 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:44:06.620021 augenrules[1184]: No rules
Jul 14 21:44:06.619814 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 14 21:44:06.620224 systemd[1]: Finished ensure-sysext.service.
Jul 14 21:44:06.621172 systemd[1]: Finished audit-rules.service.
Jul 14 21:44:06.631677 systemd-resolved[1155]: Positive Trust Anchors:
Jul 14 21:44:06.631982 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:44:06.632062 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 14 21:44:06.643164 systemd-resolved[1155]: Defaulting to hostname 'linux'.
Jul 14 21:44:06.646601 systemd[1]: Started systemd-resolved.service.
Jul 14 21:44:06.647341 systemd[1]: Reached target network.target.
Jul 14 21:44:06.647925 systemd[1]: Reached target nss-lookup.target.
Jul 14 21:44:06.648541 systemd[1]: Reached target sysinit.target.
Jul 14 21:44:06.649697 systemd[1]: Started motdgen.path.
Jul 14 21:44:06.650288 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 14 21:44:06.651251 systemd[1]: Started logrotate.timer.
Jul 14 21:44:06.651948 systemd[1]: Started mdadm.timer.
Jul 14 21:44:06.652473 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 14 21:44:06.653182 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 21:44:06.653213 systemd[1]: Reached target paths.target.
Jul 14 21:44:06.653765 systemd[1]: Reached target timers.target.
Jul 14 21:44:06.656597 systemd[1]: Listening on dbus.socket.
Jul 14 21:44:06.658421 systemd[1]: Starting docker.socket...
Jul 14 21:44:06.661530 systemd[1]: Listening on sshd.socket.
Jul 14 21:44:06.662253 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:44:06.662739 systemd[1]: Listening on docker.socket.
Jul 14 21:44:06.663433 systemd[1]: Reached target sockets.target.
Jul 14 21:44:06.667424 systemd[1]: Reached target basic.target.
Jul 14 21:44:06.668184 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 14 21:44:06.668218 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 14 21:44:06.669423 systemd[1]: Starting containerd.service...
Jul 14 21:44:06.671228 systemd[1]: Starting dbus.service...
Jul 14 21:44:06.672924 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 14 21:44:06.674814 systemd[1]: Starting extend-filesystems.service...
Jul 14 21:44:06.675546 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 14 21:44:06.677060 systemd[1]: Starting motdgen.service...
Jul 14 21:44:06.678968 systemd[1]: Starting prepare-helm.service...
Jul 14 21:44:06.682615 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 14 21:44:06.685261 systemd[1]: Starting sshd-keygen.service...
Jul 14 21:44:06.688160 systemd[1]: Starting systemd-logind.service...
Jul 14 21:44:06.691200 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:44:06.691282 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 21:44:06.691492 jq[1193]: false
Jul 14 21:44:06.691717 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 14 21:44:06.692468 systemd[1]: Starting update-engine.service...
Jul 14 21:44:06.694234 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 14 21:44:06.696673 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 21:44:06.696887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 14 21:44:06.697211 jq[1211]: true
Jul 14 21:44:06.698390 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 21:44:06.698567 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 14 21:44:06.706455 jq[1215]: true
Jul 14 21:44:06.710977 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 21:44:06.711157 systemd[1]: Finished motdgen.service.
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found loop1
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda1
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda2
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda3
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found usr
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda4
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda6
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda7
Jul 14 21:44:06.716327 extend-filesystems[1194]: Found vda9
Jul 14 21:44:06.716327 extend-filesystems[1194]: Checking size of /dev/vda9
Jul 14 21:44:06.731941 tar[1214]: linux-arm64/LICENSE
Jul 14 21:44:06.731941 tar[1214]: linux-arm64/helm
Jul 14 21:44:06.738067 extend-filesystems[1194]: Resized partition /dev/vda9
Jul 14 21:44:06.740433 dbus-daemon[1192]: [system] SELinux support is enabled
Jul 14 21:44:06.740895 systemd[1]: Started dbus.service.
Jul 14 21:44:06.743273 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 21:44:06.743301 systemd[1]: Reached target system-config.target.
Jul 14 21:44:06.744006 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 21:44:06.744020 systemd[1]: Reached target user-config.target.
Jul 14 21:44:06.745917 extend-filesystems[1237]: resize2fs 1.46.5 (30-Dec-2021)
Jul 14 21:44:06.751864 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 21:44:06.776765 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 14 21:44:06.784045 systemd-logind[1203]: New seat seat0.
Jul 14 21:44:06.790137 systemd[1]: Started systemd-logind.service.
Jul 14 21:44:06.790863 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 21:44:06.814492 update_engine[1209]: I0714 21:44:06.791502 1209 main.cc:92] Flatcar Update Engine starting
Jul 14 21:44:06.814492 update_engine[1209]: I0714 21:44:06.806357 1209 update_check_scheduler.cc:74] Next update check in 4m41s
Jul 14 21:44:06.804287 systemd[1]: Started update-engine.service.
Jul 14 21:44:06.814957 extend-filesystems[1237]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 21:44:06.814957 extend-filesystems[1237]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 21:44:06.814957 extend-filesystems[1237]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 21:44:06.807606 systemd[1]: Started locksmithd.service.
Jul 14 21:44:06.819443 extend-filesystems[1194]: Resized filesystem in /dev/vda9
Jul 14 21:44:06.816756 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 21:44:06.816963 systemd[1]: Finished extend-filesystems.service.
Jul 14 21:44:06.827240 bash[1244]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 21:44:06.828446 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 14 21:44:06.832281 env[1216]: time="2025-07-14T21:44:06.832227800Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 14 21:44:06.854688 env[1216]: time="2025-07-14T21:44:06.854626960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 21:44:06.854891 env[1216]: time="2025-07-14T21:44:06.854816400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:44:06.857510 env[1216]: time="2025-07-14T21:44:06.857455440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:44:06.857510 env[1216]: time="2025-07-14T21:44:06.857496600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:44:06.857767 env[1216]: time="2025-07-14T21:44:06.857734360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:44:06.857767 env[1216]: time="2025-07-14T21:44:06.857758960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 21:44:06.857834 env[1216]: time="2025-07-14T21:44:06.857772280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 14 21:44:06.857834 env[1216]: time="2025-07-14T21:44:06.857782800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 21:44:06.857898 env[1216]: time="2025-07-14T21:44:06.857878480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:44:06.858182 env[1216]: time="2025-07-14T21:44:06.858150280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:44:06.858327 env[1216]: time="2025-07-14T21:44:06.858290920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:44:06.858327 env[1216]: time="2025-07-14T21:44:06.858321800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 21:44:06.858396 env[1216]: time="2025-07-14T21:44:06.858379840Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 14 21:44:06.858422 env[1216]: time="2025-07-14T21:44:06.858397680Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 21:44:06.865102 env[1216]: time="2025-07-14T21:44:06.865063600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 21:44:06.865102 env[1216]: time="2025-07-14T21:44:06.865123000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 21:44:06.865102 env[1216]: time="2025-07-14T21:44:06.865145240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 21:44:06.865333 env[1216]: time="2025-07-14T21:44:06.865182680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865333 env[1216]: time="2025-07-14T21:44:06.865199040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865333 env[1216]: time="2025-07-14T21:44:06.865212640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865333 env[1216]: time="2025-07-14T21:44:06.865226720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865637 env[1216]: time="2025-07-14T21:44:06.865617040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865682 env[1216]: time="2025-07-14T21:44:06.865645000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865682 env[1216]: time="2025-07-14T21:44:06.865660400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865682 env[1216]: time="2025-07-14T21:44:06.865673680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.865754 env[1216]: time="2025-07-14T21:44:06.865687600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 21:44:06.865859 env[1216]: time="2025-07-14T21:44:06.865827720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 21:44:06.865949 env[1216]: time="2025-07-14T21:44:06.865932000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 21:44:06.866231 env[1216]: time="2025-07-14T21:44:06.866208280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 21:44:06.866263 env[1216]: time="2025-07-14T21:44:06.866244760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866263 env[1216]: time="2025-07-14T21:44:06.866260320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 21:44:06.866389 env[1216]: time="2025-07-14T21:44:06.866375680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866425 env[1216]: time="2025-07-14T21:44:06.866392360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866425 env[1216]: time="2025-07-14T21:44:06.866405120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866477 env[1216]: time="2025-07-14T21:44:06.866416600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866498 env[1216]: time="2025-07-14T21:44:06.866482640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866518 env[1216]: time="2025-07-14T21:44:06.866497000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866518 env[1216]: time="2025-07-14T21:44:06.866508120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866554 env[1216]: time="2025-07-14T21:44:06.866521440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866554 env[1216]: time="2025-07-14T21:44:06.866535840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 21:44:06.866685 env[1216]: time="2025-07-14T21:44:06.866666920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866717 env[1216]: time="2025-07-14T21:44:06.866690760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866717 env[1216]: time="2025-07-14T21:44:06.866705240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.866754 env[1216]: time="2025-07-14T21:44:06.866716840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 21:44:06.866754 env[1216]: time="2025-07-14T21:44:06.866731160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 14 21:44:06.866754 env[1216]: time="2025-07-14T21:44:06.866742200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 21:44:06.866814 env[1216]: time="2025-07-14T21:44:06.866758920Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 14 21:44:06.866814 env[1216]: time="2025-07-14T21:44:06.866791560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 21:44:06.867046 env[1216]: time="2025-07-14T21:44:06.866995880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 14 21:44:06.867659 env[1216]: time="2025-07-14T21:44:06.867056320Z" level=info msg="Connect containerd service"
Jul 14 21:44:06.867659 env[1216]: time="2025-07-14T21:44:06.867085640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 14 21:44:06.867736 env[1216]: time="2025-07-14T21:44:06.867710040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 21:44:06.867939 env[1216]: time="2025-07-14T21:44:06.867913560Z" level=info msg="Start subscribing containerd event"
Jul 14 21:44:06.867980 env[1216]: time="2025-07-14T21:44:06.867957240Z" level=info msg="Start recovering state"
Jul 14 21:44:06.868028 env[1216]: time="2025-07-14T21:44:06.868012120Z" level=info msg="Start event monitor"
Jul 14 21:44:06.868059 env[1216]: time="2025-07-14T21:44:06.868034960Z" level=info msg="Start snapshots syncer"
Jul 14 21:44:06.868059 env[1216]: time="2025-07-14T21:44:06.868044760Z" level=info msg="Start cni network conf syncer for default"
Jul 14 21:44:06.868059 env[1216]: time="2025-07-14T21:44:06.868054120Z" level=info msg="Start streaming server"
Jul 14 21:44:06.868398 env[1216]: time="2025-07-14T21:44:06.868377640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 14 21:44:06.868451 env[1216]: time="2025-07-14T21:44:06.868442280Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 14 21:44:06.868518 env[1216]: time="2025-07-14T21:44:06.868500800Z" level=info msg="containerd successfully booted in 0.038192s"
Jul 14 21:44:06.868577 systemd[1]: Started containerd.service.
Jul 14 21:44:06.883625 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 21:44:07.147571 tar[1214]: linux-arm64/README.md
Jul 14 21:44:07.151893 systemd[1]: Finished prepare-helm.service.
Jul 14 21:44:07.305008 systemd-networkd[1048]: eth0: Gained IPv6LL
Jul 14 21:44:07.307097 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 14 21:44:07.308157 systemd[1]: Reached target network-online.target.
Jul 14 21:44:07.310361 systemd[1]: Starting kubelet.service...
Jul 14 21:44:07.877607 systemd[1]: Started kubelet.service.
Jul 14 21:44:08.318620 kubelet[1260]: E0714 21:44:08.318498 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:44:08.320622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:44:08.320748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:44:08.930836 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 21:44:08.950057 systemd[1]: Finished sshd-keygen.service.
Jul 14 21:44:08.952479 systemd[1]: Starting issuegen.service...
Jul 14 21:44:08.957479 systemd[1]: issuegen.service: Deactivated successfully.
Jul 14 21:44:08.957677 systemd[1]: Finished issuegen.service.
Jul 14 21:44:08.960478 systemd[1]: Starting systemd-user-sessions.service...
Jul 14 21:44:08.968860 systemd[1]: Finished systemd-user-sessions.service.
Jul 14 21:44:08.972794 systemd[1]: Started getty@tty1.service.
Jul 14 21:44:08.976120 systemd[1]: Started serial-getty@ttyAMA0.service.
Jul 14 21:44:08.977193 systemd[1]: Reached target getty.target.
Jul 14 21:44:08.978094 systemd[1]: Reached target multi-user.target.
Jul 14 21:44:08.980622 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 14 21:44:08.988608 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 14 21:44:08.988798 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 14 21:44:08.989758 systemd[1]: Startup finished in 641ms (kernel) + 4.758s (initrd) + 5.660s (userspace) = 11.059s.
Jul 14 21:44:11.061642 systemd[1]: Created slice system-sshd.slice.
Jul 14 21:44:11.062752 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:47708.service.
Jul 14 21:44:11.105036 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 47708 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:44:11.107454 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:11.120077 systemd-logind[1203]: New session 1 of user core.
Jul 14 21:44:11.120574 systemd[1]: Created slice user-500.slice.
Jul 14 21:44:11.122492 systemd[1]: Starting user-runtime-dir@500.service...
Jul 14 21:44:11.132092 systemd[1]: Finished user-runtime-dir@500.service.
Jul 14 21:44:11.134295 systemd[1]: Starting user@500.service...
Jul 14 21:44:11.137418 (systemd)[1286]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:11.200444 systemd[1286]: Queued start job for default target default.target.
Jul 14 21:44:11.200962 systemd[1286]: Reached target paths.target.
Jul 14 21:44:11.200994 systemd[1286]: Reached target sockets.target.
Jul 14 21:44:11.201005 systemd[1286]: Reached target timers.target.
Jul 14 21:44:11.201015 systemd[1286]: Reached target basic.target.
Jul 14 21:44:11.201054 systemd[1286]: Reached target default.target.
Jul 14 21:44:11.201079 systemd[1286]: Startup finished in 57ms.
Jul 14 21:44:11.201277 systemd[1]: Started user@500.service.
Jul 14 21:44:11.202279 systemd[1]: Started session-1.scope.
Jul 14 21:44:11.260057 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:47716.service.
Jul 14 21:44:11.295389 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 47716 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:44:11.296669 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:11.301089 systemd-logind[1203]: New session 2 of user core.
Jul 14 21:44:11.302715 systemd[1]: Started session-2.scope.
Jul 14 21:44:11.358937 sshd[1295]: pam_unix(sshd:session): session closed for user core
Jul 14 21:44:11.362477 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:47732.service.
Jul 14 21:44:11.363177 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:47716.service: Deactivated successfully.
Jul 14 21:44:11.364377 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit.
Jul 14 21:44:11.364465 systemd[1]: session-2.scope: Deactivated successfully.
Jul 14 21:44:11.365155 systemd-logind[1203]: Removed session 2.
Jul 14 21:44:11.398419 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 47732 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:44:11.399693 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:11.403141 systemd-logind[1203]: New session 3 of user core.
Jul 14 21:44:11.404006 systemd[1]: Started session-3.scope.
Jul 14 21:44:11.453365 sshd[1300]: pam_unix(sshd:session): session closed for user core
Jul 14 21:44:11.456814 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:47732.service: Deactivated successfully.
Jul 14 21:44:11.457483 systemd[1]: session-3.scope: Deactivated successfully.
Jul 14 21:44:11.458015 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit.
Jul 14 21:44:11.459096 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:47748.service.
Jul 14 21:44:11.459807 systemd-logind[1203]: Removed session 3.
Jul 14 21:44:11.501547 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:44:11.503119 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:11.506466 systemd-logind[1203]: New session 4 of user core.
Jul 14 21:44:11.507319 systemd[1]: Started session-4.scope.
Jul 14 21:44:11.561653 sshd[1308]: pam_unix(sshd:session): session closed for user core
Jul 14 21:44:11.564282 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:47748.service: Deactivated successfully.
Jul 14 21:44:11.565085 systemd[1]: session-4.scope: Deactivated successfully.
Jul 14 21:44:11.565741 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit.
Jul 14 21:44:11.567171 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:47764.service.
Jul 14 21:44:11.567983 systemd-logind[1203]: Removed session 4.
Jul 14 21:44:11.602432 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 47764 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:44:11.603879 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:11.607292 systemd-logind[1203]: New session 5 of user core.
Jul 14 21:44:11.608249 systemd[1]: Started session-5.scope.
Jul 14 21:44:11.666993 sudo[1317]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 21:44:11.667555 sudo[1317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 21:44:11.730142 systemd[1]: Starting docker.service...
Jul 14 21:44:11.831504 env[1328]: time="2025-07-14T21:44:11.831437706Z" level=info msg="Starting up"
Jul 14 21:44:11.833380 env[1328]: time="2025-07-14T21:44:11.833352812Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 21:44:11.833380 env[1328]: time="2025-07-14T21:44:11.833376967Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 21:44:11.833473 env[1328]: time="2025-07-14T21:44:11.833397510Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 21:44:11.833473 env[1328]: time="2025-07-14T21:44:11.833408656Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 21:44:11.835792 env[1328]: time="2025-07-14T21:44:11.835762044Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 21:44:11.835792 env[1328]: time="2025-07-14T21:44:11.835788800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 21:44:11.836336 env[1328]: time="2025-07-14T21:44:11.835809227Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 21:44:11.836336 env[1328]: time="2025-07-14T21:44:11.835819246Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 21:44:12.107071 env[1328]: time="2025-07-14T21:44:12.106974686Z" level=info msg="Loading containers: start."
Jul 14 21:44:12.228863 kernel: Initializing XFRM netlink socket
Jul 14 21:44:12.251238 env[1328]: time="2025-07-14T21:44:12.251201164Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 14 21:44:12.302232 systemd-networkd[1048]: docker0: Link UP
Jul 14 21:44:12.321299 env[1328]: time="2025-07-14T21:44:12.321255797Z" level=info msg="Loading containers: done."
Jul 14 21:44:12.342989 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck991533784-merged.mount: Deactivated successfully.
Jul 14 21:44:12.347057 env[1328]: time="2025-07-14T21:44:12.347002122Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 21:44:12.347213 env[1328]: time="2025-07-14T21:44:12.347194917Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 14 21:44:12.347338 env[1328]: time="2025-07-14T21:44:12.347309790Z" level=info msg="Daemon has completed initialization"
Jul 14 21:44:12.363935 systemd[1]: Started docker.service.
Jul 14 21:44:12.371739 env[1328]: time="2025-07-14T21:44:12.371561978Z" level=info msg="API listen on /run/docker.sock"
Jul 14 21:44:12.982066 env[1216]: time="2025-07-14T21:44:12.981986878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 14 21:44:13.602257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138573929.mount: Deactivated successfully.
Jul 14 21:44:15.221890 env[1216]: time="2025-07-14T21:44:15.221829897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:15.223466 env[1216]: time="2025-07-14T21:44:15.223434289Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:15.225441 env[1216]: time="2025-07-14T21:44:15.225414037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:15.227291 env[1216]: time="2025-07-14T21:44:15.227263883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:15.228075 env[1216]: time="2025-07-14T21:44:15.228043846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 14 21:44:15.229207 env[1216]: time="2025-07-14T21:44:15.229176675Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 14 21:44:16.761734 env[1216]: time="2025-07-14T21:44:16.761688512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:16.763241 env[1216]: time="2025-07-14T21:44:16.763211400Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 14 21:44:16.766026 env[1216]: time="2025-07-14T21:44:16.765982741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:16.769061 env[1216]: time="2025-07-14T21:44:16.769025089Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:16.769746 env[1216]: time="2025-07-14T21:44:16.769702646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 14 21:44:16.770287 env[1216]: time="2025-07-14T21:44:16.770257190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 14 21:44:18.571487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:44:18.571662 systemd[1]: Stopped kubelet.service. Jul 14 21:44:18.573068 systemd[1]: Starting kubelet.service... Jul 14 21:44:18.669544 systemd[1]: Started kubelet.service. 
Jul 14 21:44:18.721555 env[1216]: time="2025-07-14T21:44:18.721497897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:18.722920 env[1216]: time="2025-07-14T21:44:18.722885195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:18.723608 kubelet[1462]: E0714 21:44:18.723566 1462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:44:18.724674 env[1216]: time="2025-07-14T21:44:18.724642251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:18.726304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:44:18.726433 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 21:44:18.727892 env[1216]: time="2025-07-14T21:44:18.727843229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:18.728597 env[1216]: time="2025-07-14T21:44:18.728548978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 14 21:44:18.729716 env[1216]: time="2025-07-14T21:44:18.729678105Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 14 21:44:19.849952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593464615.mount: Deactivated successfully. Jul 14 21:44:20.353547 env[1216]: time="2025-07-14T21:44:20.353497716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:20.354858 env[1216]: time="2025-07-14T21:44:20.354807259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:20.356801 env[1216]: time="2025-07-14T21:44:20.356765745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:20.358480 env[1216]: time="2025-07-14T21:44:20.358447755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:20.358984 env[1216]: time="2025-07-14T21:44:20.358946506Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 14 21:44:20.359738 env[1216]: time="2025-07-14T21:44:20.359711146Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:44:20.862888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788637085.mount: Deactivated successfully. Jul 14 21:44:21.964573 env[1216]: time="2025-07-14T21:44:21.964504529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:21.997367 env[1216]: time="2025-07-14T21:44:21.997311233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.016982 env[1216]: time="2025-07-14T21:44:22.016939255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.025502 env[1216]: time="2025-07-14T21:44:22.025463915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.026334 env[1216]: time="2025-07-14T21:44:22.026293710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 21:44:22.027337 env[1216]: time="2025-07-14T21:44:22.027309648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:44:22.433772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846704699.mount: Deactivated successfully. 
Jul 14 21:44:22.437799 env[1216]: time="2025-07-14T21:44:22.437759440Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.439488 env[1216]: time="2025-07-14T21:44:22.439449147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.440999 env[1216]: time="2025-07-14T21:44:22.440971345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.442440 env[1216]: time="2025-07-14T21:44:22.442414835Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.443296 env[1216]: time="2025-07-14T21:44:22.443179232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:44:22.443987 env[1216]: time="2025-07-14T21:44:22.443960475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 14 21:44:23.017921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935165846.mount: Deactivated successfully. 
Jul 14 21:44:25.692538 env[1216]: time="2025-07-14T21:44:25.692471157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.716731 env[1216]: time="2025-07-14T21:44:25.716683918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.718561 env[1216]: time="2025-07-14T21:44:25.718533009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.720538 env[1216]: time="2025-07-14T21:44:25.720510441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.722372 env[1216]: time="2025-07-14T21:44:25.722334645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 14 21:44:28.977198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 21:44:28.977380 systemd[1]: Stopped kubelet.service. Jul 14 21:44:28.978769 systemd[1]: Starting kubelet.service... Jul 14 21:44:29.075645 systemd[1]: Started kubelet.service. 
Jul 14 21:44:29.107986 kubelet[1494]: E0714 21:44:29.107947 1494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:44:29.109432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:44:29.109566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:44:31.251251 systemd[1]: Stopped kubelet.service. Jul 14 21:44:31.253258 systemd[1]: Starting kubelet.service... Jul 14 21:44:31.283689 systemd[1]: Reloading. Jul 14 21:44:31.345141 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-07-14T21:44:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:44:31.345170 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-07-14T21:44:31Z" level=info msg="torcx already run" Jul 14 21:44:31.514215 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:44:31.514238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:44:31.530404 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:44:31.599942 systemd[1]: Started kubelet.service. Jul 14 21:44:31.603484 systemd[1]: Stopping kubelet.service... 
Jul 14 21:44:31.604092 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:44:31.604290 systemd[1]: Stopped kubelet.service. Jul 14 21:44:31.606705 systemd[1]: Starting kubelet.service... Jul 14 21:44:31.700821 systemd[1]: Started kubelet.service. Jul 14 21:44:31.753987 kubelet[1579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:44:31.754341 kubelet[1579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:44:31.754392 kubelet[1579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:44:31.754548 kubelet[1579]: I0714 21:44:31.754512 1579 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:44:32.441304 kubelet[1579]: I0714 21:44:32.441261 1579 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:44:32.441445 kubelet[1579]: I0714 21:44:32.441434 1579 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:44:32.441797 kubelet[1579]: I0714 21:44:32.441776 1579 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:44:32.473095 kubelet[1579]: E0714 21:44:32.473053 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:32.474391 kubelet[1579]: I0714 21:44:32.474357 1579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:44:32.482037 kubelet[1579]: E0714 21:44:32.482002 1579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:44:32.482037 kubelet[1579]: I0714 21:44:32.482039 1579 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:44:32.487327 kubelet[1579]: I0714 21:44:32.487294 1579 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:44:32.488058 kubelet[1579]: I0714 21:44:32.488013 1579 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:44:32.488224 kubelet[1579]: I0714 21:44:32.488059 1579 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:44:32.488305 kubelet[1579]: I0714 21:44:32.488296 1579 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 14 21:44:32.488305 kubelet[1579]: I0714 21:44:32.488306 1579 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:44:32.488507 kubelet[1579]: I0714 21:44:32.488497 1579 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:32.491077 kubelet[1579]: I0714 21:44:32.491056 1579 kubelet.go:446] "Attempting to sync node with API server" Jul 14 21:44:32.491130 kubelet[1579]: I0714 21:44:32.491080 1579 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:44:32.491130 kubelet[1579]: I0714 21:44:32.491099 1579 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:44:32.491130 kubelet[1579]: I0714 21:44:32.491108 1579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:44:32.493319 kubelet[1579]: W0714 21:44:32.493272 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:32.493356 kubelet[1579]: E0714 21:44:32.493337 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:32.496383 kubelet[1579]: I0714 21:44:32.496355 1579 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 21:44:32.496981 kubelet[1579]: I0714 21:44:32.496961 1579 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:44:32.497091 kubelet[1579]: W0714 21:44:32.497080 1579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. 
Recreating. Jul 14 21:44:32.497975 kubelet[1579]: I0714 21:44:32.497951 1579 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:44:32.498019 kubelet[1579]: I0714 21:44:32.497987 1579 server.go:1287] "Started kubelet" Jul 14 21:44:32.501392 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 14 21:44:32.501560 kubelet[1579]: I0714 21:44:32.501531 1579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:44:32.502696 kubelet[1579]: I0714 21:44:32.502593 1579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:44:32.502991 kubelet[1579]: I0714 21:44:32.502971 1579 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:44:32.503065 kubelet[1579]: I0714 21:44:32.503044 1579 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:44:32.503622 kubelet[1579]: W0714 21:44:32.503575 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:32.503677 kubelet[1579]: E0714 21:44:32.503632 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:32.503975 kubelet[1579]: I0714 21:44:32.503940 1579 server.go:479] "Adding debug handlers to kubelet server" Jul 14 21:44:32.506591 kubelet[1579]: I0714 21:44:32.506562 1579 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:44:32.506934 kubelet[1579]: E0714 21:44:32.506626 1579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:32.506998 kubelet[1579]: I0714 21:44:32.506967 1579 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:44:32.507142 kubelet[1579]: I0714 21:44:32.507119 1579 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:44:32.507190 kubelet[1579]: I0714 21:44:32.507177 1579 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:44:32.507485 kubelet[1579]: W0714 21:44:32.507446 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:32.507536 kubelet[1579]: E0714 21:44:32.507493 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:32.507821 kubelet[1579]: I0714 21:44:32.507798 1579 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:44:32.507925 kubelet[1579]: E0714 21:44:32.507682 1579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523c451aa4ea25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:44:32.497969701 +0000 UTC m=+0.793347461,LastTimestamp:2025-07-14 21:44:32.497969701 +0000 UTC m=+0.793347461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:44:32.507925 kubelet[1579]: I0714 21:44:32.507913 1579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:44:32.508292 kubelet[1579]: E0714 21:44:32.508247 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Jul 14 21:44:32.508596 kubelet[1579]: E0714 21:44:32.508579 1579 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:44:32.509414 kubelet[1579]: I0714 21:44:32.509339 1579 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:44:32.520322 kubelet[1579]: I0714 21:44:32.520280 1579 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 14 21:44:32.520815 kubelet[1579]: I0714 21:44:32.520799 1579 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:44:32.520965 kubelet[1579]: I0714 21:44:32.520816 1579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:44:32.520965 kubelet[1579]: I0714 21:44:32.520838 1579 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:32.521264 kubelet[1579]: I0714 21:44:32.521226 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:44:32.521264 kubelet[1579]: I0714 21:44:32.521247 1579 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 21:44:32.521414 kubelet[1579]: I0714 21:44:32.521266 1579 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 21:44:32.521414 kubelet[1579]: I0714 21:44:32.521274 1579 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:44:32.521414 kubelet[1579]: E0714 21:44:32.521314 1579 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:44:32.523529 kubelet[1579]: W0714 21:44:32.523483 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:32.523672 kubelet[1579]: E0714 21:44:32.523647 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:32.608051 kubelet[1579]: E0714 21:44:32.608011 1579 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jul 14 21:44:32.622227 kubelet[1579]: E0714 21:44:32.622198 1579 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 21:44:32.709860 kubelet[1579]: E0714 21:44:32.708525 1579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:32.710309 kubelet[1579]: E0714 21:44:32.710267 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Jul 14 21:44:32.771783 kubelet[1579]: I0714 21:44:32.771748 1579 policy_none.go:49] "None policy: Start" Jul 14 21:44:32.771783 kubelet[1579]: I0714 21:44:32.771778 1579 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:44:32.771783 kubelet[1579]: I0714 21:44:32.771792 1579 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:44:32.785732 systemd[1]: Created slice kubepods.slice. Jul 14 21:44:32.790203 systemd[1]: Created slice kubepods-burstable.slice. Jul 14 21:44:32.792779 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 14 21:44:32.800669 kubelet[1579]: I0714 21:44:32.800640 1579 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:44:32.800856 kubelet[1579]: I0714 21:44:32.800792 1579 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:44:32.800856 kubelet[1579]: I0714 21:44:32.800809 1579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:44:32.801134 kubelet[1579]: I0714 21:44:32.801097 1579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:44:32.801860 kubelet[1579]: E0714 21:44:32.801827 1579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 21:44:32.801931 kubelet[1579]: E0714 21:44:32.801886 1579 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:44:32.829508 systemd[1]: Created slice kubepods-burstable-pod74504402837c35cafa612c83251709c6.slice. Jul 14 21:44:32.849288 kubelet[1579]: E0714 21:44:32.849226 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:32.852072 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 14 21:44:32.863127 kubelet[1579]: E0714 21:44:32.863080 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:32.864227 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 14 21:44:32.865673 kubelet[1579]: E0714 21:44:32.865654 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:32.902754 kubelet[1579]: I0714 21:44:32.902726 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:44:32.903371 kubelet[1579]: E0714 21:44:32.903342 1579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 21:44:32.909738 kubelet[1579]: I0714 21:44:32.909692 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:32.909738 kubelet[1579]: I0714 21:44:32.909739 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:32.909842 kubelet[1579]: I0714 21:44:32.909759 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74504402837c35cafa612c83251709c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74504402837c35cafa612c83251709c6\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:32.909842 kubelet[1579]: I0714 21:44:32.909775 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/74504402837c35cafa612c83251709c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74504402837c35cafa612c83251709c6\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:32.909842 kubelet[1579]: I0714 21:44:32.909798 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74504402837c35cafa612c83251709c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74504402837c35cafa612c83251709c6\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:32.909842 kubelet[1579]: I0714 21:44:32.909817 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:32.909842 kubelet[1579]: I0714 21:44:32.909831 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:32.909990 kubelet[1579]: I0714 21:44:32.909884 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:32.909990 kubelet[1579]: I0714 21:44:32.909919 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:33.104743 kubelet[1579]: I0714 21:44:33.104636 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:44:33.105702 kubelet[1579]: E0714 21:44:33.105654 1579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 21:44:33.111213 kubelet[1579]: E0714 21:44:33.111172 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Jul 14 21:44:33.150552 kubelet[1579]: E0714 21:44:33.150498 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:33.151182 env[1216]: time="2025-07-14T21:44:33.151134034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74504402837c35cafa612c83251709c6,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:33.163774 kubelet[1579]: E0714 21:44:33.163743 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:33.164448 env[1216]: time="2025-07-14T21:44:33.164391727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:33.166639 kubelet[1579]: E0714 21:44:33.166611 1579 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:33.167041 env[1216]: time="2025-07-14T21:44:33.167002249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:33.346799 kubelet[1579]: W0714 21:44:33.346735 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:33.346996 kubelet[1579]: E0714 21:44:33.346805 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:33.507088 kubelet[1579]: I0714 21:44:33.506996 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:44:33.507366 kubelet[1579]: E0714 21:44:33.507340 1579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 21:44:33.593680 kubelet[1579]: W0714 21:44:33.593596 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:33.593680 kubelet[1579]: E0714 21:44:33.593672 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:33.695375 kubelet[1579]: W0714 21:44:33.695323 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:33.695550 kubelet[1579]: E0714 21:44:33.695526 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:33.700355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876477652.mount: Deactivated successfully. 
Jul 14 21:44:33.719767 env[1216]: time="2025-07-14T21:44:33.719725060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.724734 env[1216]: time="2025-07-14T21:44:33.724693854Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.726414 env[1216]: time="2025-07-14T21:44:33.726381284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.727379 env[1216]: time="2025-07-14T21:44:33.727346631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.729400 env[1216]: time="2025-07-14T21:44:33.729324411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.731442 env[1216]: time="2025-07-14T21:44:33.731404394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.733326 env[1216]: time="2025-07-14T21:44:33.733297586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.734166 env[1216]: time="2025-07-14T21:44:33.734131935Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.736058 env[1216]: time="2025-07-14T21:44:33.736030079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.737927 env[1216]: time="2025-07-14T21:44:33.737899547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.739437 env[1216]: time="2025-07-14T21:44:33.739409012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.741227 env[1216]: time="2025-07-14T21:44:33.741198165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:33.768539 env[1216]: time="2025-07-14T21:44:33.768400449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:33.768539 env[1216]: time="2025-07-14T21:44:33.768443063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:33.768707 env[1216]: time="2025-07-14T21:44:33.768453846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:33.769342 env[1216]: time="2025-07-14T21:44:33.769280368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af775f614c50a52e1d5be6d60f8d013fd27bccb3fb051fa4e951573a2699ff7a pid=1637 runtime=io.containerd.runc.v2 Jul 14 21:44:33.769451 env[1216]: time="2025-07-14T21:44:33.769299538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:33.769451 env[1216]: time="2025-07-14T21:44:33.769333046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:33.769451 env[1216]: time="2025-07-14T21:44:33.769342711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:33.770195 env[1216]: time="2025-07-14T21:44:33.769796689Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/456a5f83b4e2ed879f7bad1c542331a6665c70908905b0ad67fe189ee2b30f41 pid=1634 runtime=io.containerd.runc.v2 Jul 14 21:44:33.771475 env[1216]: time="2025-07-14T21:44:33.771396494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:33.771475 env[1216]: time="2025-07-14T21:44:33.771447655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:33.771622 env[1216]: time="2025-07-14T21:44:33.771472896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:33.772500 env[1216]: time="2025-07-14T21:44:33.771750187Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/541ab599729a85dfa421392200c9319bbde60ad3b25296c648dd3850332a3c08 pid=1639 runtime=io.containerd.runc.v2 Jul 14 21:44:33.781807 systemd[1]: Started cri-containerd-af775f614c50a52e1d5be6d60f8d013fd27bccb3fb051fa4e951573a2699ff7a.scope. Jul 14 21:44:33.794748 systemd[1]: Started cri-containerd-456a5f83b4e2ed879f7bad1c542331a6665c70908905b0ad67fe189ee2b30f41.scope. Jul 14 21:44:33.808503 systemd[1]: Started cri-containerd-541ab599729a85dfa421392200c9319bbde60ad3b25296c648dd3850332a3c08.scope. Jul 14 21:44:33.817330 kubelet[1579]: E0714 21:44:33.813480 1579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523c451aa4ea25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:44:32.497969701 +0000 UTC m=+0.793347461,LastTimestamp:2025-07-14 21:44:32.497969701 +0000 UTC m=+0.793347461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:44:33.847103 env[1216]: time="2025-07-14T21:44:33.847061376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"af775f614c50a52e1d5be6d60f8d013fd27bccb3fb051fa4e951573a2699ff7a\"" Jul 14 21:44:33.850645 kubelet[1579]: E0714 21:44:33.848220 
1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:33.850753 env[1216]: time="2025-07-14T21:44:33.850129111Z" level=info msg="CreateContainer within sandbox \"af775f614c50a52e1d5be6d60f8d013fd27bccb3fb051fa4e951573a2699ff7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:44:33.856207 env[1216]: time="2025-07-14T21:44:33.856172723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74504402837c35cafa612c83251709c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"456a5f83b4e2ed879f7bad1c542331a6665c70908905b0ad67fe189ee2b30f41\"" Jul 14 21:44:33.856941 kubelet[1579]: E0714 21:44:33.856918 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:33.858942 env[1216]: time="2025-07-14T21:44:33.858910608Z" level=info msg="CreateContainer within sandbox \"456a5f83b4e2ed879f7bad1c542331a6665c70908905b0ad67fe189ee2b30f41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:44:33.868118 env[1216]: time="2025-07-14T21:44:33.868063650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"541ab599729a85dfa421392200c9319bbde60ad3b25296c648dd3850332a3c08\"" Jul 14 21:44:33.868788 kubelet[1579]: E0714 21:44:33.868765 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:33.869170 env[1216]: time="2025-07-14T21:44:33.869132517Z" level=info msg="CreateContainer within sandbox \"af775f614c50a52e1d5be6d60f8d013fd27bccb3fb051fa4e951573a2699ff7a\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be2860ec949f3a21d5f62e09dd4d381a15c07fba1fbb5e0e41ce2f3a2f94770c\"" Jul 14 21:44:33.869726 env[1216]: time="2025-07-14T21:44:33.869699040Z" level=info msg="StartContainer for \"be2860ec949f3a21d5f62e09dd4d381a15c07fba1fbb5e0e41ce2f3a2f94770c\"" Jul 14 21:44:33.870243 env[1216]: time="2025-07-14T21:44:33.870213005Z" level=info msg="CreateContainer within sandbox \"541ab599729a85dfa421392200c9319bbde60ad3b25296c648dd3850332a3c08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:44:33.871513 env[1216]: time="2025-07-14T21:44:33.871478927Z" level=info msg="CreateContainer within sandbox \"456a5f83b4e2ed879f7bad1c542331a6665c70908905b0ad67fe189ee2b30f41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68442fc204882004b612fc8b1f5f9e5010d0efe2acb97221d665b1fd38d7abdd\"" Jul 14 21:44:33.872035 env[1216]: time="2025-07-14T21:44:33.872005952Z" level=info msg="StartContainer for \"68442fc204882004b612fc8b1f5f9e5010d0efe2acb97221d665b1fd38d7abdd\"" Jul 14 21:44:33.885610 env[1216]: time="2025-07-14T21:44:33.885541375Z" level=info msg="CreateContainer within sandbox \"541ab599729a85dfa421392200c9319bbde60ad3b25296c648dd3850332a3c08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3a7988332fc956b04f00d15197b6f896047be169ab208270229fe2eb6279b0e\"" Jul 14 21:44:33.886100 env[1216]: time="2025-07-14T21:44:33.886065365Z" level=info msg="StartContainer for \"e3a7988332fc956b04f00d15197b6f896047be169ab208270229fe2eb6279b0e\"" Jul 14 21:44:33.887183 systemd[1]: Started cri-containerd-be2860ec949f3a21d5f62e09dd4d381a15c07fba1fbb5e0e41ce2f3a2f94770c.scope. Jul 14 21:44:33.893989 systemd[1]: Started cri-containerd-68442fc204882004b612fc8b1f5f9e5010d0efe2acb97221d665b1fd38d7abdd.scope. 
Jul 14 21:44:33.913825 kubelet[1579]: E0714 21:44:33.912513 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Jul 14 21:44:33.916672 systemd[1]: Started cri-containerd-e3a7988332fc956b04f00d15197b6f896047be169ab208270229fe2eb6279b0e.scope. Jul 14 21:44:33.962425 env[1216]: time="2025-07-14T21:44:33.962361550Z" level=info msg="StartContainer for \"68442fc204882004b612fc8b1f5f9e5010d0efe2acb97221d665b1fd38d7abdd\" returns successfully" Jul 14 21:44:33.962547 env[1216]: time="2025-07-14T21:44:33.962482204Z" level=info msg="StartContainer for \"be2860ec949f3a21d5f62e09dd4d381a15c07fba1fbb5e0e41ce2f3a2f94770c\" returns successfully" Jul 14 21:44:34.004495 env[1216]: time="2025-07-14T21:44:34.004449091Z" level=info msg="StartContainer for \"e3a7988332fc956b04f00d15197b6f896047be169ab208270229fe2eb6279b0e\" returns successfully" Jul 14 21:44:34.008174 kubelet[1579]: W0714 21:44:34.008077 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 21:44:34.008174 kubelet[1579]: E0714 21:44:34.008138 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:34.308609 kubelet[1579]: I0714 21:44:34.308575 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:44:34.531940 kubelet[1579]: E0714 21:44:34.531912 1579 kubelet.go:3190] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:34.532195 kubelet[1579]: E0714 21:44:34.532179 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:34.532915 kubelet[1579]: E0714 21:44:34.532897 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:34.533171 kubelet[1579]: E0714 21:44:34.533157 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:34.533793 kubelet[1579]: E0714 21:44:34.533775 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:34.533966 kubelet[1579]: E0714 21:44:34.533952 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:35.535691 kubelet[1579]: E0714 21:44:35.535651 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:35.536053 kubelet[1579]: E0714 21:44:35.535784 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:35.536186 kubelet[1579]: E0714 21:44:35.536170 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:44:35.536370 kubelet[1579]: E0714 21:44:35.536354 1579 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:36.253248 kubelet[1579]: E0714 21:44:36.253213 1579 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:44:36.314233 kubelet[1579]: I0714 21:44:36.314199 1579 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 21:44:36.409026 kubelet[1579]: I0714 21:44:36.408980 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:36.417216 kubelet[1579]: E0714 21:44:36.417176 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:36.417216 kubelet[1579]: I0714 21:44:36.417211 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:36.419084 kubelet[1579]: E0714 21:44:36.419056 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:36.419084 kubelet[1579]: I0714 21:44:36.419083 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:36.420573 kubelet[1579]: E0714 21:44:36.420551 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:36.495711 kubelet[1579]: I0714 21:44:36.495674 1579 apiserver.go:52] "Watching apiserver" Jul 14 21:44:36.507505 kubelet[1579]: I0714 21:44:36.507402 1579 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:44:37.647420 kubelet[1579]: I0714 21:44:37.647387 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:37.653143 kubelet[1579]: E0714 21:44:37.653098 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:38.206685 systemd[1]: Reloading. Jul 14 21:44:38.275996 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-07-14T21:44:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:44:38.276028 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-07-14T21:44:38Z" level=info msg="torcx already run" Jul 14 21:44:38.336524 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:44:38.336544 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:44:38.352533 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:44:38.431090 systemd[1]: Stopping kubelet.service... Jul 14 21:44:38.455289 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:44:38.455488 systemd[1]: Stopped kubelet.service. Jul 14 21:44:38.455536 systemd[1]: kubelet.service: Consumed 1.156s CPU time. Jul 14 21:44:38.457618 systemd[1]: Starting kubelet.service... 
Jul 14 21:44:38.556461 systemd[1]: Started kubelet.service. Jul 14 21:44:38.603614 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:44:38.603614 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:44:38.603614 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:44:38.603983 kubelet[1916]: I0714 21:44:38.603693 1916 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:44:38.611464 kubelet[1916]: I0714 21:44:38.611424 1916 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:44:38.611605 kubelet[1916]: I0714 21:44:38.611593 1916 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:44:38.618598 kubelet[1916]: I0714 21:44:38.618548 1916 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:44:38.625113 kubelet[1916]: I0714 21:44:38.625078 1916 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 14 21:44:38.628924 kubelet[1916]: I0714 21:44:38.628703 1916 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:44:38.639132 kubelet[1916]: E0714 21:44:38.638779 1916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:44:38.639132 kubelet[1916]: I0714 21:44:38.638813 1916 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:44:38.641884 kubelet[1916]: I0714 21:44:38.641818 1916 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 21:44:38.642054 kubelet[1916]: I0714 21:44:38.642017 1916 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:44:38.642221 kubelet[1916]: I0714 21:44:38.642044 1916 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:44:38.642358 kubelet[1916]: I0714 21:44:38.642229 1916 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:44:38.642358 kubelet[1916]: I0714 21:44:38.642239 1916 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:44:38.642358 kubelet[1916]: I0714 21:44:38.642284 1916 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:38.642454 kubelet[1916]: I0714 21:44:38.642407 1916 kubelet.go:446] "Attempting 
to sync node with API server" Jul 14 21:44:38.642454 kubelet[1916]: I0714 21:44:38.642426 1916 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:44:38.642454 kubelet[1916]: I0714 21:44:38.642443 1916 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:44:38.642454 kubelet[1916]: I0714 21:44:38.642454 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:44:38.645398 kubelet[1916]: I0714 21:44:38.643438 1916 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 21:44:38.645398 kubelet[1916]: I0714 21:44:38.643926 1916 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:44:38.645398 kubelet[1916]: I0714 21:44:38.644412 1916 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:44:38.645398 kubelet[1916]: I0714 21:44:38.644439 1916 server.go:1287] "Started kubelet" Jul 14 21:44:38.654646 kubelet[1916]: I0714 21:44:38.654559 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:44:38.654953 kubelet[1916]: I0714 21:44:38.654931 1916 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:44:38.655023 kubelet[1916]: I0714 21:44:38.654999 1916 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:44:38.656114 kubelet[1916]: I0714 21:44:38.656082 1916 server.go:479] "Adding debug handlers to kubelet server" Jul 14 21:44:38.658738 kubelet[1916]: I0714 21:44:38.658712 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:44:38.660944 kubelet[1916]: I0714 21:44:38.660919 1916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:44:38.662422 kubelet[1916]: I0714 21:44:38.662404 1916 
volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:44:38.662801 kubelet[1916]: E0714 21:44:38.662774 1916 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:38.663508 kubelet[1916]: I0714 21:44:38.663483 1916 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:44:38.663723 kubelet[1916]: I0714 21:44:38.663710 1916 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:44:38.667614 kubelet[1916]: I0714 21:44:38.667575 1916 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:44:38.667716 kubelet[1916]: I0714 21:44:38.667699 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:44:38.669042 kubelet[1916]: E0714 21:44:38.669014 1916 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:44:38.669211 kubelet[1916]: I0714 21:44:38.669181 1916 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:44:38.679531 kubelet[1916]: I0714 21:44:38.679491 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:44:38.680600 kubelet[1916]: I0714 21:44:38.680580 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:44:38.680717 kubelet[1916]: I0714 21:44:38.680703 1916 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 21:44:38.680801 kubelet[1916]: I0714 21:44:38.680789 1916 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 21:44:38.680877 kubelet[1916]: I0714 21:44:38.680863 1916 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:44:38.680987 kubelet[1916]: E0714 21:44:38.680967 1916 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:44:38.714097 kubelet[1916]: I0714 21:44:38.714002 1916 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:44:38.714097 kubelet[1916]: I0714 21:44:38.714027 1916 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:44:38.714097 kubelet[1916]: I0714 21:44:38.714053 1916 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:38.714270 kubelet[1916]: I0714 21:44:38.714234 1916 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:44:38.714270 kubelet[1916]: I0714 21:44:38.714247 1916 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:44:38.714270 kubelet[1916]: I0714 21:44:38.714265 1916 policy_none.go:49] "None policy: Start" Jul 14 21:44:38.714344 kubelet[1916]: I0714 21:44:38.714274 1916 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:44:38.714344 kubelet[1916]: I0714 21:44:38.714284 1916 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:44:38.714398 kubelet[1916]: I0714 21:44:38.714382 1916 state_mem.go:75] "Updated machine memory state" Jul 14 21:44:38.718195 kubelet[1916]: I0714 21:44:38.718163 1916 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:44:38.721165 kubelet[1916]: I0714 21:44:38.721136 1916 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:44:38.721322 kubelet[1916]: I0714 21:44:38.721283 1916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:44:38.721611 kubelet[1916]: I0714 21:44:38.721561 1916 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:44:38.722594 kubelet[1916]: E0714 21:44:38.722554 1916 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 21:44:38.781547 kubelet[1916]: I0714 21:44:38.781511 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:38.781827 kubelet[1916]: I0714 21:44:38.781541 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:38.781969 kubelet[1916]: I0714 21:44:38.781656 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:38.787972 kubelet[1916]: E0714 21:44:38.787933 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:38.825280 kubelet[1916]: I0714 21:44:38.825252 1916 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:44:38.836443 kubelet[1916]: I0714 21:44:38.836408 1916 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 21:44:38.836591 kubelet[1916]: I0714 21:44:38.836502 1916 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 21:44:38.965175 kubelet[1916]: I0714 21:44:38.965062 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74504402837c35cafa612c83251709c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74504402837c35cafa612c83251709c6\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:38.965175 kubelet[1916]: I0714 21:44:38.965101 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:38.965175 kubelet[1916]: I0714 21:44:38.965120 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:38.965175 kubelet[1916]: I0714 21:44:38.965138 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:38.965175 kubelet[1916]: I0714 21:44:38.965154 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74504402837c35cafa612c83251709c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74504402837c35cafa612c83251709c6\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:38.965556 kubelet[1916]: I0714 21:44:38.965173 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74504402837c35cafa612c83251709c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74504402837c35cafa612c83251709c6\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:38.965556 kubelet[1916]: I0714 21:44:38.965192 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:38.965556 kubelet[1916]: I0714 21:44:38.965233 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:38.965556 kubelet[1916]: I0714 21:44:38.965284 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:39.087541 kubelet[1916]: E0714 21:44:39.087508 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.087802 kubelet[1916]: E0714 21:44:39.087648 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.089028 kubelet[1916]: E0714 21:44:39.088992 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.259832 sudo[1950]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 21:44:39.260261 sudo[1950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 14 
21:44:39.643367 kubelet[1916]: I0714 21:44:39.643272 1916 apiserver.go:52] "Watching apiserver" Jul 14 21:44:39.664737 kubelet[1916]: I0714 21:44:39.664687 1916 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:44:39.696495 kubelet[1916]: I0714 21:44:39.696468 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:39.696676 kubelet[1916]: E0714 21:44:39.696647 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.696799 kubelet[1916]: I0714 21:44:39.696547 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:39.703078 kubelet[1916]: E0714 21:44:39.703046 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:39.703244 kubelet[1916]: E0714 21:44:39.703224 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.704540 kubelet[1916]: E0714 21:44:39.704512 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:39.704686 kubelet[1916]: E0714 21:44:39.704669 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.713253 sudo[1950]: pam_unix(sudo:session): session closed for user root Jul 14 21:44:39.718727 kubelet[1916]: I0714 21:44:39.717908 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.717894701 podStartE2EDuration="1.717894701s" podCreationTimestamp="2025-07-14 21:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:39.71765645 +0000 UTC m=+1.156363561" watchObservedRunningTime="2025-07-14 21:44:39.717894701 +0000 UTC m=+1.156601812" Jul 14 21:44:39.731809 kubelet[1916]: I0714 21:44:39.731755 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7317364469999998 podStartE2EDuration="1.731736447s" podCreationTimestamp="2025-07-14 21:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:39.724801669 +0000 UTC m=+1.163508780" watchObservedRunningTime="2025-07-14 21:44:39.731736447 +0000 UTC m=+1.170443558" Jul 14 21:44:39.761553 kubelet[1916]: I0714 21:44:39.761481 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.761430829 podStartE2EDuration="2.761430829s" podCreationTimestamp="2025-07-14 21:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:39.734600665 +0000 UTC m=+1.173307776" watchObservedRunningTime="2025-07-14 21:44:39.761430829 +0000 UTC m=+1.200137940" Jul 14 21:44:40.697316 kubelet[1916]: E0714 21:44:40.697277 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:40.697866 kubelet[1916]: E0714 21:44:40.697824 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 
21:44:41.699239 kubelet[1916]: E0714 21:44:41.699203 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:41.699588 kubelet[1916]: E0714 21:44:41.699258 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:41.755902 sudo[1317]: pam_unix(sudo:session): session closed for user root Jul 14 21:44:41.760991 sshd[1314]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:41.767002 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:44:41.767819 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:47764.service: Deactivated successfully. Jul 14 21:44:41.769165 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:44:41.769372 systemd[1]: session-5.scope: Consumed 7.996s CPU time. Jul 14 21:44:41.770167 systemd-logind[1203]: Removed session 5. Jul 14 21:44:43.741528 kubelet[1916]: I0714 21:44:43.741497 1916 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 21:44:43.741872 env[1216]: time="2025-07-14T21:44:43.741820780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 21:44:43.742062 kubelet[1916]: I0714 21:44:43.741994 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:44:44.419025 systemd[1]: Created slice kubepods-besteffort-pod3292e1da_1796_40b4_84e0_13982b45faca.slice. 
Jul 14 21:44:44.420572 kubelet[1916]: W0714 21:44:44.420506 1916 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3292e1da_1796_40b4_84e0_13982b45faca.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3292e1da_1796_40b4_84e0_13982b45faca.slice/cpuset.cpus.effective: no such device Jul 14 21:44:44.433901 systemd[1]: Created slice kubepods-burstable-pod9b312e59_3d12_41c8_bef1_0ed26e880928.slice. Jul 14 21:44:44.502169 kubelet[1916]: I0714 21:44:44.502131 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-hubble-tls\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.502359 kubelet[1916]: I0714 21:44:44.502342 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3292e1da-1796-40b4-84e0-13982b45faca-kube-proxy\") pod \"kube-proxy-6mbqr\" (UID: \"3292e1da-1796-40b4-84e0-13982b45faca\") " pod="kube-system/kube-proxy-6mbqr" Jul 14 21:44:44.502469 kubelet[1916]: I0714 21:44:44.502456 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-lib-modules\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.502614 kubelet[1916]: I0714 21:44:44.502594 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-hostproc\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " 
pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.502714 kubelet[1916]: I0714 21:44:44.502702 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-xtables-lock\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.502842 kubelet[1916]: I0714 21:44:44.502828 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b312e59-3d12-41c8-bef1-0ed26e880928-clustermesh-secrets\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.503069 kubelet[1916]: I0714 21:44:44.503048 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-kernel\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.503185 kubelet[1916]: I0714 21:44:44.503170 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-run\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.503277 kubelet[1916]: I0714 21:44:44.503261 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjkl\" (UniqueName: \"kubernetes.io/projected/3292e1da-1796-40b4-84e0-13982b45faca-kube-api-access-mvjkl\") pod \"kube-proxy-6mbqr\" (UID: \"3292e1da-1796-40b4-84e0-13982b45faca\") " pod="kube-system/kube-proxy-6mbqr" Jul 14 21:44:44.503374 kubelet[1916]: I0714 21:44:44.503362 1916 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-cgroup\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.503472 kubelet[1916]: I0714 21:44:44.503459 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3292e1da-1796-40b4-84e0-13982b45faca-xtables-lock\") pod \"kube-proxy-6mbqr\" (UID: \"3292e1da-1796-40b4-84e0-13982b45faca\") " pod="kube-system/kube-proxy-6mbqr" Jul 14 21:44:44.503549 kubelet[1916]: I0714 21:44:44.503538 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cni-path\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.503635 kubelet[1916]: I0714 21:44:44.503622 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3292e1da-1796-40b4-84e0-13982b45faca-lib-modules\") pod \"kube-proxy-6mbqr\" (UID: \"3292e1da-1796-40b4-84e0-13982b45faca\") " pod="kube-system/kube-proxy-6mbqr" Jul 14 21:44:44.503955 kubelet[1916]: I0714 21:44:44.503899 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gsgf\" (UniqueName: \"kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-kube-api-access-8gsgf\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.504174 kubelet[1916]: I0714 21:44:44.504088 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-bpf-maps\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.504331 kubelet[1916]: I0714 21:44:44.504315 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-etc-cni-netd\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.504449 kubelet[1916]: I0714 21:44:44.504434 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-config-path\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.504544 kubelet[1916]: I0714 21:44:44.504530 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-net\") pod \"cilium-pp4s5\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " pod="kube-system/cilium-pp4s5" Jul 14 21:44:44.608339 kubelet[1916]: I0714 21:44:44.608287 1916 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 21:44:44.730740 kubelet[1916]: E0714 21:44:44.730617 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:44.732028 env[1216]: time="2025-07-14T21:44:44.731282114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mbqr,Uid:3292e1da-1796-40b4-84e0-13982b45faca,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:44.736413 kubelet[1916]: E0714 21:44:44.736385 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:44.737143 env[1216]: time="2025-07-14T21:44:44.737107680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pp4s5,Uid:9b312e59-3d12-41c8-bef1-0ed26e880928,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:44.749486 env[1216]: time="2025-07-14T21:44:44.749277642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:44.749486 env[1216]: time="2025-07-14T21:44:44.749325576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:44.749486 env[1216]: time="2025-07-14T21:44:44.749341140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:44.749897 env[1216]: time="2025-07-14T21:44:44.749519552Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9235c25714f4d4f90a9e31bfd7ce21b78d1ce28bfdb1a2f0a985456566dc8747 pid=2007 runtime=io.containerd.runc.v2 Jul 14 21:44:44.759693 env[1216]: time="2025-07-14T21:44:44.759032864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:44.761139 systemd[1]: Started cri-containerd-9235c25714f4d4f90a9e31bfd7ce21b78d1ce28bfdb1a2f0a985456566dc8747.scope. Jul 14 21:44:44.763614 env[1216]: time="2025-07-14T21:44:44.761604569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:44.763614 env[1216]: time="2025-07-14T21:44:44.761631977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:44.763614 env[1216]: time="2025-07-14T21:44:44.761946228Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732 pid=2029 runtime=io.containerd.runc.v2 Jul 14 21:44:44.775530 systemd[1]: Started cri-containerd-e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732.scope. Jul 14 21:44:44.812654 systemd[1]: Created slice kubepods-besteffort-podd85766d9_e2ee_47db_97b5_d0ba9f06a74f.slice. 
Jul 14 21:44:44.839725 env[1216]: time="2025-07-14T21:44:44.839215426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mbqr,Uid:3292e1da-1796-40b4-84e0-13982b45faca,Namespace:kube-system,Attempt:0,} returns sandbox id \"9235c25714f4d4f90a9e31bfd7ce21b78d1ce28bfdb1a2f0a985456566dc8747\"" Jul 14 21:44:44.843166 kubelet[1916]: E0714 21:44:44.842619 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:44.846007 env[1216]: time="2025-07-14T21:44:44.845963379Z" level=info msg="CreateContainer within sandbox \"9235c25714f4d4f90a9e31bfd7ce21b78d1ce28bfdb1a2f0a985456566dc8747\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 21:44:44.861584 env[1216]: time="2025-07-14T21:44:44.861536765Z" level=info msg="CreateContainer within sandbox \"9235c25714f4d4f90a9e31bfd7ce21b78d1ce28bfdb1a2f0a985456566dc8747\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"765390f0a42420167d531b9fc4d35a0a739b4c70897e5d86dbb69e53b10e32c9\"" Jul 14 21:44:44.862316 env[1216]: time="2025-07-14T21:44:44.862082683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pp4s5,Uid:9b312e59-3d12-41c8-bef1-0ed26e880928,Namespace:kube-system,Attempt:0,} returns sandbox id \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\"" Jul 14 21:44:44.862629 env[1216]: time="2025-07-14T21:44:44.862572785Z" level=info msg="StartContainer for \"765390f0a42420167d531b9fc4d35a0a739b4c70897e5d86dbb69e53b10e32c9\"" Jul 14 21:44:44.863373 kubelet[1916]: E0714 21:44:44.863172 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:44.864718 env[1216]: time="2025-07-14T21:44:44.864685076Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 21:44:44.884686 systemd[1]: Started cri-containerd-765390f0a42420167d531b9fc4d35a0a739b4c70897e5d86dbb69e53b10e32c9.scope. Jul 14 21:44:44.908871 kubelet[1916]: I0714 21:44:44.908759 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647mg\" (UniqueName: \"kubernetes.io/projected/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-kube-api-access-647mg\") pod \"cilium-operator-6c4d7847fc-sjl6r\" (UID: \"d85766d9-e2ee-47db-97b5-d0ba9f06a74f\") " pod="kube-system/cilium-operator-6c4d7847fc-sjl6r" Jul 14 21:44:44.908871 kubelet[1916]: I0714 21:44:44.908804 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sjl6r\" (UID: \"d85766d9-e2ee-47db-97b5-d0ba9f06a74f\") " pod="kube-system/cilium-operator-6c4d7847fc-sjl6r" Jul 14 21:44:44.999731 env[1216]: time="2025-07-14T21:44:44.999607798Z" level=info msg="StartContainer for \"765390f0a42420167d531b9fc4d35a0a739b4c70897e5d86dbb69e53b10e32c9\" returns successfully" Jul 14 21:44:45.115836 kubelet[1916]: E0714 21:44:45.115274 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:45.116033 env[1216]: time="2025-07-14T21:44:45.115980697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sjl6r,Uid:d85766d9-e2ee-47db-97b5-d0ba9f06a74f,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:45.132317 env[1216]: time="2025-07-14T21:44:45.132246313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:45.132493 env[1216]: time="2025-07-14T21:44:45.132291085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:45.132493 env[1216]: time="2025-07-14T21:44:45.132301968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:45.132493 env[1216]: time="2025-07-14T21:44:45.132457691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84 pid=2161 runtime=io.containerd.runc.v2 Jul 14 21:44:45.143246 systemd[1]: Started cri-containerd-3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84.scope. Jul 14 21:44:45.180007 env[1216]: time="2025-07-14T21:44:45.179945540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sjl6r,Uid:d85766d9-e2ee-47db-97b5-d0ba9f06a74f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84\"" Jul 14 21:44:45.181287 kubelet[1916]: E0714 21:44:45.180755 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:45.708103 kubelet[1916]: E0714 21:44:45.708073 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:45.720815 kubelet[1916]: I0714 21:44:45.720756 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6mbqr" podStartSLOduration=1.7207374469999999 podStartE2EDuration="1.720737447s" podCreationTimestamp="2025-07-14 21:44:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:45.719785026 +0000 UTC m=+7.158492137" watchObservedRunningTime="2025-07-14 21:44:45.720737447 +0000 UTC m=+7.159444518" Jul 14 21:44:47.166617 kubelet[1916]: E0714 21:44:47.166519 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:47.715258 kubelet[1916]: E0714 21:44:47.715221 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:49.054983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1634994421.mount: Deactivated successfully. Jul 14 21:44:50.446284 kubelet[1916]: E0714 21:44:50.442499 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:50.644839 kubelet[1916]: E0714 21:44:50.644804 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:50.720646 kubelet[1916]: E0714 21:44:50.720544 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:51.356348 env[1216]: time="2025-07-14T21:44:51.356301179Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:51.358124 env[1216]: time="2025-07-14T21:44:51.358083615Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:51.359619 env[1216]: time="2025-07-14T21:44:51.359588715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:51.360320 env[1216]: time="2025-07-14T21:44:51.360291856Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 21:44:51.361918 env[1216]: time="2025-07-14T21:44:51.361871011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 21:44:51.365327 env[1216]: time="2025-07-14T21:44:51.363485614Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:44:51.377221 env[1216]: time="2025-07-14T21:44:51.377170746Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\"" Jul 14 21:44:51.377889 env[1216]: time="2025-07-14T21:44:51.377828197Z" level=info msg="StartContainer for \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\"" Jul 14 21:44:51.396323 systemd[1]: Started cri-containerd-774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a.scope. 
Jul 14 21:44:51.443156 env[1216]: time="2025-07-14T21:44:51.442980807Z" level=info msg="StartContainer for \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\" returns successfully" Jul 14 21:44:51.470329 systemd[1]: cri-containerd-774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a.scope: Deactivated successfully. Jul 14 21:44:51.617639 env[1216]: time="2025-07-14T21:44:51.617505096Z" level=info msg="shim disconnected" id=774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a Jul 14 21:44:51.617639 env[1216]: time="2025-07-14T21:44:51.617549945Z" level=warning msg="cleaning up after shim disconnected" id=774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a namespace=k8s.io Jul 14 21:44:51.617639 env[1216]: time="2025-07-14T21:44:51.617559627Z" level=info msg="cleaning up dead shim" Jul 14 21:44:51.627051 env[1216]: time="2025-07-14T21:44:51.627002353Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:44:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2335 runtime=io.containerd.runc.v2\n" Jul 14 21:44:51.723398 kubelet[1916]: E0714 21:44:51.723349 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:51.732459 env[1216]: time="2025-07-14T21:44:51.732414042Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 21:44:51.749046 env[1216]: time="2025-07-14T21:44:51.748999953Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\"" Jul 14 21:44:51.751159 env[1216]: time="2025-07-14T21:44:51.751119817Z" 
level=info msg="StartContainer for \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\"" Jul 14 21:44:51.764541 systemd[1]: Started cri-containerd-f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab.scope. Jul 14 21:44:51.813465 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:44:51.813732 systemd[1]: Stopped systemd-sysctl.service. Jul 14 21:44:51.813969 systemd[1]: Stopping systemd-sysctl.service... Jul 14 21:44:51.815830 systemd[1]: Starting systemd-sysctl.service... Jul 14 21:44:51.818415 systemd[1]: cri-containerd-f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab.scope: Deactivated successfully. Jul 14 21:44:51.819817 env[1216]: time="2025-07-14T21:44:51.819776286Z" level=info msg="StartContainer for \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\" returns successfully" Jul 14 21:44:51.830361 systemd[1]: Finished systemd-sysctl.service. Jul 14 21:44:51.848982 env[1216]: time="2025-07-14T21:44:51.848933788Z" level=info msg="shim disconnected" id=f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab Jul 14 21:44:51.849281 env[1216]: time="2025-07-14T21:44:51.849260054Z" level=warning msg="cleaning up after shim disconnected" id=f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab namespace=k8s.io Jul 14 21:44:51.849356 env[1216]: time="2025-07-14T21:44:51.849341950Z" level=info msg="cleaning up dead shim" Jul 14 21:44:51.856210 env[1216]: time="2025-07-14T21:44:51.856155350Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:44:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2402 runtime=io.containerd.runc.v2\n" Jul 14 21:44:51.930614 update_engine[1209]: I0714 21:44:51.930562 1209 update_attempter.cc:509] Updating boot flags... Jul 14 21:44:52.373696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a-rootfs.mount: Deactivated successfully. 
Jul 14 21:44:52.643094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515266724.mount: Deactivated successfully. Jul 14 21:44:52.726941 kubelet[1916]: E0714 21:44:52.726907 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:52.729084 env[1216]: time="2025-07-14T21:44:52.728940322Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 21:44:52.752969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606384385.mount: Deactivated successfully. Jul 14 21:44:52.759546 env[1216]: time="2025-07-14T21:44:52.759487482Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\"" Jul 14 21:44:52.760359 env[1216]: time="2025-07-14T21:44:52.760308798Z" level=info msg="StartContainer for \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\"" Jul 14 21:44:52.777783 systemd[1]: Started cri-containerd-5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0.scope. Jul 14 21:44:52.854636 systemd[1]: cri-containerd-5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0.scope: Deactivated successfully. 
Jul 14 21:44:52.873896 env[1216]: time="2025-07-14T21:44:52.873811508Z" level=info msg="StartContainer for \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\" returns successfully" Jul 14 21:44:52.922515 env[1216]: time="2025-07-14T21:44:52.922391452Z" level=info msg="shim disconnected" id=5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0 Jul 14 21:44:52.922515 env[1216]: time="2025-07-14T21:44:52.922446182Z" level=warning msg="cleaning up after shim disconnected" id=5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0 namespace=k8s.io Jul 14 21:44:52.922515 env[1216]: time="2025-07-14T21:44:52.922457945Z" level=info msg="cleaning up dead shim" Jul 14 21:44:52.931144 env[1216]: time="2025-07-14T21:44:52.931095505Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:44:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2473 runtime=io.containerd.runc.v2\n" Jul 14 21:44:53.195893 env[1216]: time="2025-07-14T21:44:53.195737878Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:53.198783 env[1216]: time="2025-07-14T21:44:53.198720896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:53.200593 env[1216]: time="2025-07-14T21:44:53.200544346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:53.201088 env[1216]: time="2025-07-14T21:44:53.201055918Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 21:44:53.203179 env[1216]: time="2025-07-14T21:44:53.203144296Z" level=info msg="CreateContainer within sandbox \"3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 21:44:53.218431 env[1216]: time="2025-07-14T21:44:53.218368166Z" level=info msg="CreateContainer within sandbox \"3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\"" Jul 14 21:44:53.220566 env[1216]: time="2025-07-14T21:44:53.220510273Z" level=info msg="StartContainer for \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\"" Jul 14 21:44:53.235125 systemd[1]: Started cri-containerd-2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858.scope. 
Jul 14 21:44:53.290228 env[1216]: time="2025-07-14T21:44:53.290174859Z" level=info msg="StartContainer for \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" returns successfully" Jul 14 21:44:53.729739 kubelet[1916]: E0714 21:44:53.729698 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:53.732374 kubelet[1916]: E0714 21:44:53.732343 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:53.736844 env[1216]: time="2025-07-14T21:44:53.736790066Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 21:44:53.758108 env[1216]: time="2025-07-14T21:44:53.758041385Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\"" Jul 14 21:44:53.758602 env[1216]: time="2025-07-14T21:44:53.758554478Z" level=info msg="StartContainer for \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\"" Jul 14 21:44:53.795958 systemd[1]: Started cri-containerd-f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7.scope. 
Jul 14 21:44:53.805807 kubelet[1916]: I0714 21:44:53.805743 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sjl6r" podStartSLOduration=1.785563651 podStartE2EDuration="9.805722159s" podCreationTimestamp="2025-07-14 21:44:44 +0000 UTC" firstStartedPulling="2025-07-14 21:44:45.181859784 +0000 UTC m=+6.620566895" lastFinishedPulling="2025-07-14 21:44:53.202018292 +0000 UTC m=+14.640725403" observedRunningTime="2025-07-14 21:44:53.757326856 +0000 UTC m=+15.196033927" watchObservedRunningTime="2025-07-14 21:44:53.805722159 +0000 UTC m=+15.244429270" Jul 14 21:44:53.851992 systemd[1]: cri-containerd-f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7.scope: Deactivated successfully. Jul 14 21:44:53.880791 env[1216]: time="2025-07-14T21:44:53.880739712Z" level=info msg="StartContainer for \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\" returns successfully" Jul 14 21:44:53.913200 env[1216]: time="2025-07-14T21:44:53.913155408Z" level=info msg="shim disconnected" id=f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7 Jul 14 21:44:53.913433 env[1216]: time="2025-07-14T21:44:53.913415335Z" level=warning msg="cleaning up after shim disconnected" id=f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7 namespace=k8s.io Jul 14 21:44:53.913493 env[1216]: time="2025-07-14T21:44:53.913479907Z" level=info msg="cleaning up dead shim" Jul 14 21:44:53.932247 env[1216]: time="2025-07-14T21:44:53.932202089Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:44:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n" Jul 14 21:44:54.373555 systemd[1]: run-containerd-runc-k8s.io-f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7-runc.VTRKvZ.mount: Deactivated successfully. 
Jul 14 21:44:54.373656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7-rootfs.mount: Deactivated successfully. Jul 14 21:44:54.736620 kubelet[1916]: E0714 21:44:54.736402 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:54.736620 kubelet[1916]: E0714 21:44:54.736446 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:54.738886 env[1216]: time="2025-07-14T21:44:54.738836371Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 21:44:54.764867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572008326.mount: Deactivated successfully. Jul 14 21:44:54.782332 env[1216]: time="2025-07-14T21:44:54.782260962Z" level=info msg="CreateContainer within sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\"" Jul 14 21:44:54.784723 env[1216]: time="2025-07-14T21:44:54.784680258Z" level=info msg="StartContainer for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\"" Jul 14 21:44:54.800162 systemd[1]: Started cri-containerd-035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a.scope. 
Jul 14 21:44:54.882716 env[1216]: time="2025-07-14T21:44:54.882664915Z" level=info msg="StartContainer for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" returns successfully" Jul 14 21:44:55.054598 kubelet[1916]: I0714 21:44:55.054457 1916 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 21:44:55.156842 systemd[1]: Created slice kubepods-burstable-podbe2d6bd7_3a71_447a_9459_feb1900944eb.slice. Jul 14 21:44:55.162354 systemd[1]: Created slice kubepods-burstable-pod662aa4ad_956f_43b9_bcd2_7e9735a2ad2d.slice. Jul 14 21:44:55.200885 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 14 21:44:55.206350 kubelet[1916]: I0714 21:44:55.206309 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be2d6bd7-3a71-447a-9459-feb1900944eb-config-volume\") pod \"coredns-668d6bf9bc-sk86q\" (UID: \"be2d6bd7-3a71-447a-9459-feb1900944eb\") " pod="kube-system/coredns-668d6bf9bc-sk86q" Jul 14 21:44:55.206482 kubelet[1916]: I0714 21:44:55.206359 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzgsz\" (UniqueName: \"kubernetes.io/projected/662aa4ad-956f-43b9-bcd2-7e9735a2ad2d-kube-api-access-bzgsz\") pod \"coredns-668d6bf9bc-wl9gx\" (UID: \"662aa4ad-956f-43b9-bcd2-7e9735a2ad2d\") " pod="kube-system/coredns-668d6bf9bc-wl9gx" Jul 14 21:44:55.206482 kubelet[1916]: I0714 21:44:55.206392 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cw5n\" (UniqueName: \"kubernetes.io/projected/be2d6bd7-3a71-447a-9459-feb1900944eb-kube-api-access-6cw5n\") pod \"coredns-668d6bf9bc-sk86q\" (UID: \"be2d6bd7-3a71-447a-9459-feb1900944eb\") " pod="kube-system/coredns-668d6bf9bc-sk86q" Jul 14 21:44:55.206482 kubelet[1916]: I0714 21:44:55.206407 1916 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/662aa4ad-956f-43b9-bcd2-7e9735a2ad2d-config-volume\") pod \"coredns-668d6bf9bc-wl9gx\" (UID: \"662aa4ad-956f-43b9-bcd2-7e9735a2ad2d\") " pod="kube-system/coredns-668d6bf9bc-wl9gx" Jul 14 21:44:55.460598 kubelet[1916]: E0714 21:44:55.460537 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:55.461486 env[1216]: time="2025-07-14T21:44:55.461273180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sk86q,Uid:be2d6bd7-3a71-447a-9459-feb1900944eb,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:55.466929 kubelet[1916]: E0714 21:44:55.466829 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:55.467827 env[1216]: time="2025-07-14T21:44:55.467775766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl9gx,Uid:662aa4ad-956f-43b9-bcd2-7e9735a2ad2d,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:55.469913 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 14 21:44:55.741247 kubelet[1916]: E0714 21:44:55.741047 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:55.759766 kubelet[1916]: I0714 21:44:55.759696 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pp4s5" podStartSLOduration=5.261925785 podStartE2EDuration="11.759678182s" podCreationTimestamp="2025-07-14 21:44:44 +0000 UTC" firstStartedPulling="2025-07-14 21:44:44.863908252 +0000 UTC m=+6.302615363" lastFinishedPulling="2025-07-14 21:44:51.361660649 +0000 UTC m=+12.800367760" observedRunningTime="2025-07-14 21:44:55.758697581 +0000 UTC m=+17.197404692" watchObservedRunningTime="2025-07-14 21:44:55.759678182 +0000 UTC m=+17.198385293" Jul 14 21:44:56.743282 kubelet[1916]: E0714 21:44:56.743240 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:57.147284 systemd-networkd[1048]: cilium_host: Link UP Jul 14 21:44:57.148900 systemd-networkd[1048]: cilium_net: Link UP Jul 14 21:44:57.150265 systemd-networkd[1048]: cilium_net: Gained carrier Jul 14 21:44:57.150967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 14 21:44:57.151043 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 14 21:44:57.151148 systemd-networkd[1048]: cilium_host: Gained carrier Jul 14 21:44:57.254682 systemd-networkd[1048]: cilium_vxlan: Link UP Jul 14 21:44:57.254874 systemd-networkd[1048]: cilium_vxlan: Gained carrier Jul 14 21:44:57.621877 kernel: NET: Registered PF_ALG protocol family Jul 14 21:44:57.744547 kubelet[1916]: E0714 21:44:57.744492 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 14 21:44:57.864995 systemd-networkd[1048]: cilium_host: Gained IPv6LL Jul 14 21:44:57.929269 systemd-networkd[1048]: cilium_net: Gained IPv6LL Jul 14 21:44:58.233210 systemd-networkd[1048]: lxc_health: Link UP Jul 14 21:44:58.247093 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 14 21:44:58.246930 systemd-networkd[1048]: lxc_health: Gained carrier Jul 14 21:44:58.551740 systemd-networkd[1048]: lxc5daf284bb1ef: Link UP Jul 14 21:44:58.562260 systemd-networkd[1048]: lxc79ab317399bd: Link UP Jul 14 21:44:58.573941 kernel: eth0: renamed from tmp1aeff Jul 14 21:44:58.578890 kernel: eth0: renamed from tmpbf8d4 Jul 14 21:44:58.585802 systemd-networkd[1048]: lxc5daf284bb1ef: Gained carrier Jul 14 21:44:58.589888 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5daf284bb1ef: link becomes ready Jul 14 21:44:58.593869 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc79ab317399bd: link becomes ready Jul 14 21:44:58.594679 systemd-networkd[1048]: lxc79ab317399bd: Gained carrier Jul 14 21:44:58.747758 kubelet[1916]: E0714 21:44:58.747693 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:58.760992 systemd-networkd[1048]: cilium_vxlan: Gained IPv6LL Jul 14 21:44:59.749094 kubelet[1916]: E0714 21:44:59.749055 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:59.912986 systemd-networkd[1048]: lxc5daf284bb1ef: Gained IPv6LL Jul 14 21:44:59.977028 systemd-networkd[1048]: lxc_health: Gained IPv6LL Jul 14 21:45:00.617017 systemd-networkd[1048]: lxc79ab317399bd: Gained IPv6LL Jul 14 21:45:00.750391 kubelet[1916]: E0714 21:45:00.750318 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 14 21:45:02.272448 env[1216]: time="2025-07-14T21:45:02.272361334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:45:02.272448 env[1216]: time="2025-07-14T21:45:02.272403179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:45:02.272448 env[1216]: time="2025-07-14T21:45:02.272414180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:45:02.273175 env[1216]: time="2025-07-14T21:45:02.273125945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1aeff7b3269fcb375615b565bb81c16b7a113052d3f639cfd9d58c2b8438df04 pid=3139 runtime=io.containerd.runc.v2 Jul 14 21:45:02.275511 env[1216]: time="2025-07-14T21:45:02.275426341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:45:02.275511 env[1216]: time="2025-07-14T21:45:02.275469586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:45:02.275511 env[1216]: time="2025-07-14T21:45:02.275479707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:45:02.275893 env[1216]: time="2025-07-14T21:45:02.275809507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf8d4038a7f2cbe3e362b11d595965cb4104ea81202895467ec0f59763ce1065 pid=3155 runtime=io.containerd.runc.v2 Jul 14 21:45:02.290186 systemd[1]: Started cri-containerd-bf8d4038a7f2cbe3e362b11d595965cb4104ea81202895467ec0f59763ce1065.scope. 
Jul 14 21:45:02.300056 systemd[1]: run-containerd-runc-k8s.io-1aeff7b3269fcb375615b565bb81c16b7a113052d3f639cfd9d58c2b8438df04-runc.TWAg6o.mount: Deactivated successfully. Jul 14 21:45:02.304025 systemd[1]: Started cri-containerd-1aeff7b3269fcb375615b565bb81c16b7a113052d3f639cfd9d58c2b8438df04.scope. Jul 14 21:45:02.337179 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:45:02.338197 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:45:02.357595 env[1216]: time="2025-07-14T21:45:02.357541862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl9gx,Uid:662aa4ad-956f-43b9-bcd2-7e9735a2ad2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf8d4038a7f2cbe3e362b11d595965cb4104ea81202895467ec0f59763ce1065\"" Jul 14 21:45:02.358834 kubelet[1916]: E0714 21:45:02.358358 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:02.361450 env[1216]: time="2025-07-14T21:45:02.361413566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sk86q,Uid:be2d6bd7-3a71-447a-9459-feb1900944eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aeff7b3269fcb375615b565bb81c16b7a113052d3f639cfd9d58c2b8438df04\"" Jul 14 21:45:02.362197 kubelet[1916]: E0714 21:45:02.362109 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:02.363037 env[1216]: time="2025-07-14T21:45:02.362883862Z" level=info msg="CreateContainer within sandbox \"bf8d4038a7f2cbe3e362b11d595965cb4104ea81202895467ec0f59763ce1065\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:45:02.364840 env[1216]: 
time="2025-07-14T21:45:02.364806613Z" level=info msg="CreateContainer within sandbox \"1aeff7b3269fcb375615b565bb81c16b7a113052d3f639cfd9d58c2b8438df04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:45:02.379737 env[1216]: time="2025-07-14T21:45:02.379681035Z" level=info msg="CreateContainer within sandbox \"bf8d4038a7f2cbe3e362b11d595965cb4104ea81202895467ec0f59763ce1065\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5de4690d81ef5b9a8a028f1ada6ca2c87c062df883456f75aefb96432da6b9b3\"" Jul 14 21:45:02.380808 env[1216]: time="2025-07-14T21:45:02.380776566Z" level=info msg="StartContainer for \"5de4690d81ef5b9a8a028f1ada6ca2c87c062df883456f75aefb96432da6b9b3\"" Jul 14 21:45:02.388576 env[1216]: time="2025-07-14T21:45:02.388532776Z" level=info msg="CreateContainer within sandbox \"1aeff7b3269fcb375615b565bb81c16b7a113052d3f639cfd9d58c2b8438df04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a9df2425d23bbfdce06bd779a512c4a9e594aa9739191355f25fa462a91eed5\"" Jul 14 21:45:02.389277 env[1216]: time="2025-07-14T21:45:02.389248422Z" level=info msg="StartContainer for \"3a9df2425d23bbfdce06bd779a512c4a9e594aa9739191355f25fa462a91eed5\"" Jul 14 21:45:02.403691 systemd[1]: Started cri-containerd-5de4690d81ef5b9a8a028f1ada6ca2c87c062df883456f75aefb96432da6b9b3.scope. Jul 14 21:45:02.410634 systemd[1]: Started cri-containerd-3a9df2425d23bbfdce06bd779a512c4a9e594aa9739191355f25fa462a91eed5.scope. 
Jul 14 21:45:02.453023 env[1216]: time="2025-07-14T21:45:02.452954376Z" level=info msg="StartContainer for \"3a9df2425d23bbfdce06bd779a512c4a9e594aa9739191355f25fa462a91eed5\" returns successfully" Jul 14 21:45:02.456348 env[1216]: time="2025-07-14T21:45:02.456179363Z" level=info msg="StartContainer for \"5de4690d81ef5b9a8a028f1ada6ca2c87c062df883456f75aefb96432da6b9b3\" returns successfully" Jul 14 21:45:02.754870 kubelet[1916]: E0714 21:45:02.754761 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:02.757548 kubelet[1916]: E0714 21:45:02.757519 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:02.779578 kubelet[1916]: I0714 21:45:02.779511 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wl9gx" podStartSLOduration=18.77949415 podStartE2EDuration="18.77949415s" podCreationTimestamp="2025-07-14 21:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:45:02.768045018 +0000 UTC m=+24.206752129" watchObservedRunningTime="2025-07-14 21:45:02.77949415 +0000 UTC m=+24.218201261" Jul 14 21:45:02.792557 kubelet[1916]: I0714 21:45:02.792492 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sk86q" podStartSLOduration=18.792473745 podStartE2EDuration="18.792473745s" podCreationTimestamp="2025-07-14 21:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:45:02.780488469 +0000 UTC m=+24.219195620" watchObservedRunningTime="2025-07-14 21:45:02.792473745 +0000 UTC m=+24.231180856" Jul 14 
21:45:03.759429 kubelet[1916]: E0714 21:45:03.759385 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:03.760000 kubelet[1916]: E0714 21:45:03.759977 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:04.761588 kubelet[1916]: E0714 21:45:04.761560 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:04.761957 kubelet[1916]: E0714 21:45:04.761634 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:11.867238 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:55726.service. Jul 14 21:45:11.912929 sshd[3301]: Accepted publickey for core from 10.0.0.1 port 55726 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:11.915158 sshd[3301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:11.919267 systemd-logind[1203]: New session 6 of user core. Jul 14 21:45:11.920300 systemd[1]: Started session-6.scope. Jul 14 21:45:12.048246 sshd[3301]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:12.050868 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:55726.service: Deactivated successfully. Jul 14 21:45:12.051648 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:45:12.052147 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:45:12.052781 systemd-logind[1203]: Removed session 6. Jul 14 21:45:17.052918 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:56982.service. 
Jul 14 21:45:17.096913 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 56982 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:17.098814 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:17.104702 systemd-logind[1203]: New session 7 of user core. Jul 14 21:45:17.105149 systemd[1]: Started session-7.scope. Jul 14 21:45:17.254238 sshd[3320]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:17.260957 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:56982.service: Deactivated successfully. Jul 14 21:45:17.261745 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:45:17.266413 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit. Jul 14 21:45:17.267515 systemd-logind[1203]: Removed session 7. Jul 14 21:45:22.263009 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:56994.service. Jul 14 21:45:22.311595 sshd[3336]: Accepted publickey for core from 10.0.0.1 port 56994 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:22.313362 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:22.318091 systemd-logind[1203]: New session 8 of user core. Jul 14 21:45:22.318534 systemd[1]: Started session-8.scope. Jul 14 21:45:22.453419 sshd[3336]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:22.455764 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:45:22.456369 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:45:22.456471 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:56994.service: Deactivated successfully. Jul 14 21:45:22.457560 systemd-logind[1203]: Removed session 8. Jul 14 21:45:27.457082 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:46880.service. 
Jul 14 21:45:27.502752 sshd[3350]: Accepted publickey for core from 10.0.0.1 port 46880 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:27.504052 sshd[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:27.509457 systemd-logind[1203]: New session 9 of user core. Jul 14 21:45:27.509878 systemd[1]: Started session-9.scope. Jul 14 21:45:27.647566 sshd[3350]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:27.651940 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:46894.service. Jul 14 21:45:27.652513 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:46880.service: Deactivated successfully. Jul 14 21:45:27.653454 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:45:27.654171 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:45:27.655126 systemd-logind[1203]: Removed session 9. Jul 14 21:45:27.689584 sshd[3364]: Accepted publickey for core from 10.0.0.1 port 46894 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:27.691406 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:27.695881 systemd-logind[1203]: New session 10 of user core. Jul 14 21:45:27.696744 systemd[1]: Started session-10.scope. Jul 14 21:45:27.910312 sshd[3364]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:27.914175 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:46898.service. Jul 14 21:45:27.914706 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:46894.service: Deactivated successfully. Jul 14 21:45:27.915735 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:45:27.916625 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit. Jul 14 21:45:27.917745 systemd-logind[1203]: Removed session 10. 
Jul 14 21:45:27.957889 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 46898 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:27.959977 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:27.963545 systemd-logind[1203]: New session 11 of user core. Jul 14 21:45:27.964423 systemd[1]: Started session-11.scope. Jul 14 21:45:28.082307 sshd[3376]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:28.084628 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:45:28.085290 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit. Jul 14 21:45:28.085403 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:46898.service: Deactivated successfully. Jul 14 21:45:28.086517 systemd-logind[1203]: Removed session 11. Jul 14 21:45:33.087306 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:50262.service. Jul 14 21:45:33.122975 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 50262 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:33.124672 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:33.128384 systemd-logind[1203]: New session 12 of user core. Jul 14 21:45:33.129212 systemd[1]: Started session-12.scope. Jul 14 21:45:33.242983 sshd[3392]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:33.246330 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:50262.service: Deactivated successfully. Jul 14 21:45:33.247157 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 21:45:33.247804 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit. Jul 14 21:45:33.248554 systemd-logind[1203]: Removed session 12. Jul 14 21:45:38.255339 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:50270.service. 
Jul 14 21:45:38.300970 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 50270 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:38.302358 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:38.306072 systemd-logind[1203]: New session 13 of user core. Jul 14 21:45:38.307103 systemd[1]: Started session-13.scope. Jul 14 21:45:38.421463 sshd[3406]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:38.424436 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:50270.service: Deactivated successfully. Jul 14 21:45:38.425147 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 21:45:38.425738 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit. Jul 14 21:45:38.426990 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:50272.service. Jul 14 21:45:38.427737 systemd-logind[1203]: Removed session 13. Jul 14 21:45:38.467592 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 50272 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:38.468945 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:38.472228 systemd-logind[1203]: New session 14 of user core. Jul 14 21:45:38.473276 systemd[1]: Started session-14.scope. Jul 14 21:45:38.688737 sshd[3420]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:38.692562 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:50280.service. Jul 14 21:45:38.693116 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:50272.service: Deactivated successfully. Jul 14 21:45:38.694323 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 21:45:38.694978 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit. Jul 14 21:45:38.695974 systemd-logind[1203]: Removed session 14. 
Jul 14 21:45:38.730171 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 50280 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:38.732089 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:38.736568 systemd-logind[1203]: New session 15 of user core. Jul 14 21:45:38.736982 systemd[1]: Started session-15.scope. Jul 14 21:45:39.454775 sshd[3433]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:39.457958 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:50294.service. Jul 14 21:45:39.458905 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:50280.service: Deactivated successfully. Jul 14 21:45:39.459590 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 21:45:39.460352 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. Jul 14 21:45:39.461217 systemd-logind[1203]: Removed session 15. Jul 14 21:45:39.494071 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 50294 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:39.495264 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:39.498913 systemd-logind[1203]: New session 16 of user core. Jul 14 21:45:39.499514 systemd[1]: Started session-16.scope. Jul 14 21:45:39.717404 sshd[3452]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:39.721036 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:50310.service. Jul 14 21:45:39.725766 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:50294.service: Deactivated successfully. Jul 14 21:45:39.726582 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 21:45:39.727750 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. Jul 14 21:45:39.728429 systemd-logind[1203]: Removed session 16. 
Jul 14 21:45:39.758890 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 50310 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:39.760173 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:39.763530 systemd-logind[1203]: New session 17 of user core. Jul 14 21:45:39.764659 systemd[1]: Started session-17.scope. Jul 14 21:45:39.878617 sshd[3465]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:39.881179 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:50310.service: Deactivated successfully. Jul 14 21:45:39.881909 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 21:45:39.882443 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. Jul 14 21:45:39.883871 systemd-logind[1203]: Removed session 17. Jul 14 21:45:44.883434 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:52148.service. Jul 14 21:45:44.930213 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 52148 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:44.932010 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:44.936243 systemd-logind[1203]: New session 18 of user core. Jul 14 21:45:44.939929 systemd[1]: Started session-18.scope. Jul 14 21:45:45.070690 sshd[3481]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:45.073298 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:52148.service: Deactivated successfully. Jul 14 21:45:45.074134 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 21:45:45.074660 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit. Jul 14 21:45:45.075375 systemd-logind[1203]: Removed session 18. Jul 14 21:45:50.079890 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:52154.service. 
Jul 14 21:45:50.115970 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 52154 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:50.117079 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:50.120856 systemd-logind[1203]: New session 19 of user core. Jul 14 21:45:50.123934 systemd[1]: Started session-19.scope. Jul 14 21:45:50.240760 sshd[3497]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:50.243325 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:52154.service: Deactivated successfully. Jul 14 21:45:50.244131 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 21:45:50.244899 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. Jul 14 21:45:50.246897 systemd-logind[1203]: Removed session 19. Jul 14 21:45:51.681864 kubelet[1916]: E0714 21:45:51.681822 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:55.245268 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:50776.service. Jul 14 21:45:55.280797 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:55.281955 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:55.285769 systemd-logind[1203]: New session 20 of user core. Jul 14 21:45:55.287435 systemd[1]: Started session-20.scope. Jul 14 21:45:55.416603 sshd[3510]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:55.418975 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:50776.service: Deactivated successfully. Jul 14 21:45:55.419786 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 21:45:55.420492 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. Jul 14 21:45:55.421267 systemd-logind[1203]: Removed session 20. 
Jul 14 21:46:00.421641 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:50786.service. Jul 14 21:46:00.459492 sshd[3523]: Accepted publickey for core from 10.0.0.1 port 50786 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:46:00.460772 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:46:00.465358 systemd-logind[1203]: New session 21 of user core. Jul 14 21:46:00.465751 systemd[1]: Started session-21.scope. Jul 14 21:46:00.608258 sshd[3523]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:00.612735 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:50786.service: Deactivated successfully. Jul 14 21:46:00.613563 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 21:46:00.614437 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit. Jul 14 21:46:00.617115 systemd-logind[1203]: Removed session 21. Jul 14 21:46:00.619047 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:50790.service. Jul 14 21:46:00.656794 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 50790 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:46:00.658058 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:46:00.661932 systemd-logind[1203]: New session 22 of user core. Jul 14 21:46:00.664924 systemd[1]: Started session-22.scope. Jul 14 21:46:02.508324 env[1216]: time="2025-07-14T21:46:02.507572133Z" level=info msg="StopContainer for \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" with timeout 30 (s)" Jul 14 21:46:02.508324 env[1216]: time="2025-07-14T21:46:02.507998891Z" level=info msg="Stop container \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" with signal terminated" Jul 14 21:46:02.519626 systemd[1]: cri-containerd-2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858.scope: Deactivated successfully. 
Jul 14 21:46:02.539657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858-rootfs.mount: Deactivated successfully. Jul 14 21:46:02.547258 env[1216]: time="2025-07-14T21:46:02.547212898Z" level=info msg="shim disconnected" id=2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858 Jul 14 21:46:02.547548 env[1216]: time="2025-07-14T21:46:02.547525736Z" level=warning msg="cleaning up after shim disconnected" id=2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858 namespace=k8s.io Jul 14 21:46:02.547628 env[1216]: time="2025-07-14T21:46:02.547614536Z" level=info msg="cleaning up dead shim" Jul 14 21:46:02.548424 env[1216]: time="2025-07-14T21:46:02.548349331Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:46:02.553711 env[1216]: time="2025-07-14T21:46:02.553667780Z" level=info msg="StopContainer for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" with timeout 2 (s)" Jul 14 21:46:02.554061 env[1216]: time="2025-07-14T21:46:02.554029658Z" level=info msg="Stop container \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" with signal terminated" Jul 14 21:46:02.555301 env[1216]: time="2025-07-14T21:46:02.555268370Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3581 runtime=io.containerd.runc.v2\n" Jul 14 21:46:02.558186 env[1216]: time="2025-07-14T21:46:02.558148953Z" level=info msg="StopContainer for \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" returns successfully" Jul 14 21:46:02.558925 env[1216]: time="2025-07-14T21:46:02.558891949Z" level=info msg="StopPodSandbox for 
\"3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84\"" Jul 14 21:46:02.558997 env[1216]: time="2025-07-14T21:46:02.558970788Z" level=info msg="Container to stop \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.560707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84-shm.mount: Deactivated successfully. Jul 14 21:46:02.564146 systemd-networkd[1048]: lxc_health: Link DOWN Jul 14 21:46:02.564150 systemd-networkd[1048]: lxc_health: Lost carrier Jul 14 21:46:02.568871 systemd[1]: cri-containerd-3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84.scope: Deactivated successfully. Jul 14 21:46:02.594794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84-rootfs.mount: Deactivated successfully. Jul 14 21:46:02.595593 systemd[1]: cri-containerd-035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a.scope: Deactivated successfully. Jul 14 21:46:02.595971 systemd[1]: cri-containerd-035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a.scope: Consumed 6.801s CPU time. Jul 14 21:46:02.607288 env[1216]: time="2025-07-14T21:46:02.607232662Z" level=info msg="shim disconnected" id=3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84 Jul 14 21:46:02.607559 env[1216]: time="2025-07-14T21:46:02.607538940Z" level=warning msg="cleaning up after shim disconnected" id=3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84 namespace=k8s.io Jul 14 21:46:02.607630 env[1216]: time="2025-07-14T21:46:02.607617460Z" level=info msg="cleaning up dead shim" Jul 14 21:46:02.612584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a-rootfs.mount: Deactivated successfully. 
Jul 14 21:46:02.615967 env[1216]: time="2025-07-14T21:46:02.615932010Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" Jul 14 21:46:02.616542 env[1216]: time="2025-07-14T21:46:02.616504567Z" level=info msg="TearDown network for sandbox \"3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84\" successfully" Jul 14 21:46:02.618528 env[1216]: time="2025-07-14T21:46:02.616631686Z" level=info msg="StopPodSandbox for \"3c0b73516f845f53ce82f6c1f042df4846dbb00b110a124dddec4ca0b2a14e84\" returns successfully" Jul 14 21:46:02.627500 env[1216]: time="2025-07-14T21:46:02.627447862Z" level=info msg="shim disconnected" id=035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a Jul 14 21:46:02.627500 env[1216]: time="2025-07-14T21:46:02.627498782Z" level=warning msg="cleaning up after shim disconnected" id=035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a namespace=k8s.io Jul 14 21:46:02.627650 env[1216]: time="2025-07-14T21:46:02.627510662Z" level=info msg="cleaning up dead shim" Jul 14 21:46:02.637123 env[1216]: time="2025-07-14T21:46:02.637000325Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3647 runtime=io.containerd.runc.v2\n" Jul 14 21:46:02.639372 env[1216]: time="2025-07-14T21:46:02.639294352Z" level=info msg="StopContainer for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" returns successfully" Jul 14 21:46:02.641554 env[1216]: time="2025-07-14T21:46:02.641525818Z" level=info msg="StopPodSandbox for \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\"" Jul 14 21:46:02.641714 env[1216]: time="2025-07-14T21:46:02.641690217Z" level=info msg="Container to stop \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 
21:46:02.641781 env[1216]: time="2025-07-14T21:46:02.641765817Z" level=info msg="Container to stop \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.641841 env[1216]: time="2025-07-14T21:46:02.641825137Z" level=info msg="Container to stop \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.641954 env[1216]: time="2025-07-14T21:46:02.641920416Z" level=info msg="Container to stop \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.642014 env[1216]: time="2025-07-14T21:46:02.641998736Z" level=info msg="Container to stop \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.649246 systemd[1]: cri-containerd-e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732.scope: Deactivated successfully. 
Jul 14 21:46:02.674664 env[1216]: time="2025-07-14T21:46:02.674399143Z" level=info msg="shim disconnected" id=e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732 Jul 14 21:46:02.674961 env[1216]: time="2025-07-14T21:46:02.674662342Z" level=warning msg="cleaning up after shim disconnected" id=e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732 namespace=k8s.io Jul 14 21:46:02.674961 env[1216]: time="2025-07-14T21:46:02.674682422Z" level=info msg="cleaning up dead shim" Jul 14 21:46:02.683633 env[1216]: time="2025-07-14T21:46:02.683589049Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3677 runtime=io.containerd.runc.v2\n" Jul 14 21:46:02.685346 env[1216]: time="2025-07-14T21:46:02.685263279Z" level=info msg="TearDown network for sandbox \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" successfully" Jul 14 21:46:02.685346 env[1216]: time="2025-07-14T21:46:02.685338359Z" level=info msg="StopPodSandbox for \"e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732\" returns successfully" Jul 14 21:46:02.719626 kubelet[1916]: I0714 21:46:02.719578 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-lib-modules\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.719626 kubelet[1916]: I0714 21:46:02.719629 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-cilium-config-path\") pod \"d85766d9-e2ee-47db-97b5-d0ba9f06a74f\" (UID: \"d85766d9-e2ee-47db-97b5-d0ba9f06a74f\") " Jul 14 21:46:02.720179 kubelet[1916]: I0714 21:46:02.719672 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8gsgf\" (UniqueName: \"kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-kube-api-access-8gsgf\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720179 kubelet[1916]: I0714 21:46:02.719691 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-etc-cni-netd\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720179 kubelet[1916]: I0714 21:46:02.719783 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-hostproc\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720179 kubelet[1916]: I0714 21:46:02.719800 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-kernel\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720179 kubelet[1916]: I0714 21:46:02.719817 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-run\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720179 kubelet[1916]: I0714 21:46:02.719837 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-config-path\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720328 
kubelet[1916]: I0714 21:46:02.719880 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b312e59-3d12-41c8-bef1-0ed26e880928-clustermesh-secrets\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720328 kubelet[1916]: I0714 21:46:02.719912 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cni-path\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720328 kubelet[1916]: I0714 21:46:02.719926 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-bpf-maps\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720328 kubelet[1916]: I0714 21:46:02.719941 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-cgroup\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720328 kubelet[1916]: I0714 21:46:02.719958 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-hubble-tls\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720328 kubelet[1916]: I0714 21:46:02.719972 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-net\") pod 
\"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720493 kubelet[1916]: I0714 21:46:02.719988 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-xtables-lock\") pod \"9b312e59-3d12-41c8-bef1-0ed26e880928\" (UID: \"9b312e59-3d12-41c8-bef1-0ed26e880928\") " Jul 14 21:46:02.720493 kubelet[1916]: I0714 21:46:02.720007 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-647mg\" (UniqueName: \"kubernetes.io/projected/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-kube-api-access-647mg\") pod \"d85766d9-e2ee-47db-97b5-d0ba9f06a74f\" (UID: \"d85766d9-e2ee-47db-97b5-d0ba9f06a74f\") " Jul 14 21:46:02.721477 kubelet[1916]: I0714 21:46:02.721427 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.721554 kubelet[1916]: I0714 21:46:02.721500 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.721554 kubelet[1916]: I0714 21:46:02.721520 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cni-path" (OuterVolumeSpecName: "cni-path") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.721554 kubelet[1916]: I0714 21:46:02.721538 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.721554 kubelet[1916]: I0714 21:46:02.721552 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-hostproc" (OuterVolumeSpecName: "hostproc") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.723508 kubelet[1916]: I0714 21:46:02.723468 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d85766d9-e2ee-47db-97b5-d0ba9f06a74f" (UID: "d85766d9-e2ee-47db-97b5-d0ba9f06a74f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:46:02.723579 kubelet[1916]: I0714 21:46:02.723535 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.723579 kubelet[1916]: I0714 21:46:02.723554 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.723653 kubelet[1916]: I0714 21:46:02.723579 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.724824 kubelet[1916]: I0714 21:46:02.724792 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-kube-api-access-647mg" (OuterVolumeSpecName: "kube-api-access-647mg") pod "d85766d9-e2ee-47db-97b5-d0ba9f06a74f" (UID: "d85766d9-e2ee-47db-97b5-d0ba9f06a74f"). InnerVolumeSpecName "kube-api-access-647mg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:02.724952 kubelet[1916]: I0714 21:46:02.724928 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.724992 kubelet[1916]: I0714 21:46:02.724968 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.725053 kubelet[1916]: I0714 21:46:02.724902 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-kube-api-access-8gsgf" (OuterVolumeSpecName: "kube-api-access-8gsgf") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "kube-api-access-8gsgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:02.726534 kubelet[1916]: I0714 21:46:02.726492 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:02.726795 kubelet[1916]: I0714 21:46:02.726759 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:46:02.728731 kubelet[1916]: I0714 21:46:02.728694 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b312e59-3d12-41c8-bef1-0ed26e880928-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9b312e59-3d12-41c8-bef1-0ed26e880928" (UID: "9b312e59-3d12-41c8-bef1-0ed26e880928"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820522 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820563 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-647mg\" (UniqueName: \"kubernetes.io/projected/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-kube-api-access-647mg\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820575 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820584 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/d85766d9-e2ee-47db-97b5-d0ba9f06a74f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820592 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gsgf\" (UniqueName: \"kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-kube-api-access-8gsgf\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820600 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820608 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822109 kubelet[1916]: I0714 21:46:02.820616 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820623 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820630 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820638 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820645 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b312e59-3d12-41c8-bef1-0ed26e880928-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820657 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820665 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820672 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b312e59-3d12-41c8-bef1-0ed26e880928-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.822405 kubelet[1916]: I0714 21:46:02.820679 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b312e59-3d12-41c8-bef1-0ed26e880928-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:02.881722 kubelet[1916]: I0714 21:46:02.881695 1916 scope.go:117] "RemoveContainer" containerID="035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a" Jul 14 21:46:02.883495 env[1216]: time="2025-07-14T21:46:02.883442463Z" level=info msg="RemoveContainer for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\"" Jul 14 21:46:02.888244 systemd[1]: Removed slice kubepods-besteffort-podd85766d9_e2ee_47db_97b5_d0ba9f06a74f.slice. Jul 14 21:46:02.891311 systemd[1]: Removed slice kubepods-burstable-pod9b312e59_3d12_41c8_bef1_0ed26e880928.slice. 
Jul 14 21:46:02.891397 systemd[1]: kubepods-burstable-pod9b312e59_3d12_41c8_bef1_0ed26e880928.slice: Consumed 7.021s CPU time. Jul 14 21:46:02.894526 env[1216]: time="2025-07-14T21:46:02.894480558Z" level=info msg="RemoveContainer for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" returns successfully" Jul 14 21:46:02.894808 kubelet[1916]: I0714 21:46:02.894778 1916 scope.go:117] "RemoveContainer" containerID="f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7" Jul 14 21:46:02.896447 env[1216]: time="2025-07-14T21:46:02.896257307Z" level=info msg="RemoveContainer for \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\"" Jul 14 21:46:02.900745 env[1216]: time="2025-07-14T21:46:02.900675361Z" level=info msg="RemoveContainer for \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\" returns successfully" Jul 14 21:46:02.900997 kubelet[1916]: I0714 21:46:02.900975 1916 scope.go:117] "RemoveContainer" containerID="5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0" Jul 14 21:46:02.903220 env[1216]: time="2025-07-14T21:46:02.903171906Z" level=info msg="RemoveContainer for \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\"" Jul 14 21:46:02.908641 env[1216]: time="2025-07-14T21:46:02.908571834Z" level=info msg="RemoveContainer for \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\" returns successfully" Jul 14 21:46:02.908913 kubelet[1916]: I0714 21:46:02.908890 1916 scope.go:117] "RemoveContainer" containerID="f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab" Jul 14 21:46:02.918755 env[1216]: time="2025-07-14T21:46:02.916200189Z" level=info msg="RemoveContainer for \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\"" Jul 14 21:46:02.919279 env[1216]: time="2025-07-14T21:46:02.919210811Z" level=info msg="RemoveContainer for \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\" returns successfully" Jul 14 
21:46:02.919471 kubelet[1916]: I0714 21:46:02.919448 1916 scope.go:117] "RemoveContainer" containerID="774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a" Jul 14 21:46:02.920875 env[1216]: time="2025-07-14T21:46:02.920795402Z" level=info msg="RemoveContainer for \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\"" Jul 14 21:46:02.924029 env[1216]: time="2025-07-14T21:46:02.923990223Z" level=info msg="RemoveContainer for \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\" returns successfully" Jul 14 21:46:02.924280 kubelet[1916]: I0714 21:46:02.924255 1916 scope.go:117] "RemoveContainer" containerID="035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a" Jul 14 21:46:02.924569 env[1216]: time="2025-07-14T21:46:02.924479260Z" level=error msg="ContainerStatus for \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\": not found" Jul 14 21:46:02.924746 kubelet[1916]: E0714 21:46:02.924723 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\": not found" containerID="035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a" Jul 14 21:46:02.926096 kubelet[1916]: I0714 21:46:02.925988 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a"} err="failed to get container status \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\": rpc error: code = NotFound desc = an error occurred when try to find container \"035bbbcf31f3b05067a899197dc7f450f37bb581c97a1bfb6b2f5ca4c7df613a\": not found" Jul 14 21:46:02.926096 kubelet[1916]: I0714 21:46:02.926094 
1916 scope.go:117] "RemoveContainer" containerID="f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7" Jul 14 21:46:02.926414 env[1216]: time="2025-07-14T21:46:02.926355769Z" level=error msg="ContainerStatus for \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\": not found" Jul 14 21:46:02.926636 kubelet[1916]: E0714 21:46:02.926596 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\": not found" containerID="f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7" Jul 14 21:46:02.926698 kubelet[1916]: I0714 21:46:02.926639 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7"} err="failed to get container status \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f10e8e7b6373d05a7ca29508bada92093af517d1828e9e78f8e0abc6f55f4ef7\": not found" Jul 14 21:46:02.926698 kubelet[1916]: I0714 21:46:02.926655 1916 scope.go:117] "RemoveContainer" containerID="5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0" Jul 14 21:46:02.926900 env[1216]: time="2025-07-14T21:46:02.926830366Z" level=error msg="ContainerStatus for \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\": not found" Jul 14 21:46:02.927070 kubelet[1916]: E0714 21:46:02.927036 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\": not found" containerID="5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0" Jul 14 21:46:02.927070 kubelet[1916]: I0714 21:46:02.927062 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0"} err="failed to get container status \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5361f072d5e880a8c84177b16cf4c0fc5f99e6af9313c4d4c41450fad3c4ada0\": not found" Jul 14 21:46:02.927148 kubelet[1916]: I0714 21:46:02.927078 1916 scope.go:117] "RemoveContainer" containerID="f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab" Jul 14 21:46:02.927303 env[1216]: time="2025-07-14T21:46:02.927252563Z" level=error msg="ContainerStatus for \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\": not found" Jul 14 21:46:02.927498 kubelet[1916]: E0714 21:46:02.927476 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\": not found" containerID="f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab" Jul 14 21:46:02.927539 kubelet[1916]: I0714 21:46:02.927502 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab"} err="failed to get container status \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"f75893db1797a35b1f1fe4a8cff8d77055046e9801e522bb7f1f6dbb452b45ab\": not found" Jul 14 21:46:02.927539 kubelet[1916]: I0714 21:46:02.927516 1916 scope.go:117] "RemoveContainer" containerID="774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a" Jul 14 21:46:02.927774 env[1216]: time="2025-07-14T21:46:02.927723760Z" level=error msg="ContainerStatus for \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\": not found" Jul 14 21:46:02.927916 kubelet[1916]: E0714 21:46:02.927893 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\": not found" containerID="774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a" Jul 14 21:46:02.927961 kubelet[1916]: I0714 21:46:02.927918 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a"} err="failed to get container status \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"774e9191ec9af1d030044e30e5085656ab2d51881e3474a0b23a535791247f8a\": not found" Jul 14 21:46:02.927961 kubelet[1916]: I0714 21:46:02.927932 1916 scope.go:117] "RemoveContainer" containerID="2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858" Jul 14 21:46:02.929136 env[1216]: time="2025-07-14T21:46:02.929096352Z" level=info msg="RemoveContainer for \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\"" Jul 14 21:46:02.932548 env[1216]: time="2025-07-14T21:46:02.932504212Z" level=info msg="RemoveContainer for 
\"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" returns successfully" Jul 14 21:46:02.932971 kubelet[1916]: I0714 21:46:02.932954 1916 scope.go:117] "RemoveContainer" containerID="2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858" Jul 14 21:46:02.933244 env[1216]: time="2025-07-14T21:46:02.933189368Z" level=error msg="ContainerStatus for \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\": not found" Jul 14 21:46:02.933481 kubelet[1916]: E0714 21:46:02.933443 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\": not found" containerID="2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858" Jul 14 21:46:02.933544 kubelet[1916]: I0714 21:46:02.933484 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858"} err="failed to get container status \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\": rpc error: code = NotFound desc = an error occurred when try to find container \"2cd031fe8d610059be419bb014ab14e581cdbcbe13c1984e1278628d41802858\": not found" Jul 14 21:46:03.519161 systemd[1]: var-lib-kubelet-pods-d85766d9\x2de2ee\x2d47db\x2d97b5\x2dd0ba9f06a74f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d647mg.mount: Deactivated successfully. Jul 14 21:46:03.519262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732-rootfs.mount: Deactivated successfully. 
Jul 14 21:46:03.519331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e38b0619c83573d8dc40f03495cd62e4515de498aedb711f71a8cfea2e0b6732-shm.mount: Deactivated successfully. Jul 14 21:46:03.519396 systemd[1]: var-lib-kubelet-pods-9b312e59\x2d3d12\x2d41c8\x2dbef1\x2d0ed26e880928-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gsgf.mount: Deactivated successfully. Jul 14 21:46:03.519450 systemd[1]: var-lib-kubelet-pods-9b312e59\x2d3d12\x2d41c8\x2dbef1\x2d0ed26e880928-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 21:46:03.519503 systemd[1]: var-lib-kubelet-pods-9b312e59\x2d3d12\x2d41c8\x2dbef1\x2d0ed26e880928-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 21:46:03.741333 kubelet[1916]: E0714 21:46:03.741255 1916 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 21:46:04.462575 sshd[3537]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:04.466567 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:50790.service: Deactivated successfully. Jul 14 21:46:04.467182 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 21:46:04.467323 systemd[1]: session-22.scope: Consumed 1.133s CPU time. Jul 14 21:46:04.468063 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit. Jul 14 21:46:04.472075 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:35712.service. Jul 14 21:46:04.476006 systemd-logind[1203]: Removed session 22. Jul 14 21:46:04.510274 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 35712 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:46:04.511614 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:46:04.515567 systemd-logind[1203]: New session 23 of user core. 
Jul 14 21:46:04.516036 systemd[1]: Started session-23.scope. Jul 14 21:46:04.684710 kubelet[1916]: I0714 21:46:04.684661 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b312e59-3d12-41c8-bef1-0ed26e880928" path="/var/lib/kubelet/pods/9b312e59-3d12-41c8-bef1-0ed26e880928/volumes" Jul 14 21:46:04.685275 kubelet[1916]: I0714 21:46:04.685244 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85766d9-e2ee-47db-97b5-d0ba9f06a74f" path="/var/lib/kubelet/pods/d85766d9-e2ee-47db-97b5-d0ba9f06a74f/volumes" Jul 14 21:46:05.474582 kernel: hrtimer: interrupt took 3303993 ns Jul 14 21:46:05.626285 sshd[3697]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:05.629673 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:35718.service. Jul 14 21:46:05.630207 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:35712.service: Deactivated successfully. Jul 14 21:46:05.630918 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 21:46:05.633516 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit. Jul 14 21:46:05.635239 systemd-logind[1203]: Removed session 23. Jul 14 21:46:05.658043 kubelet[1916]: I0714 21:46:05.657994 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="d85766d9-e2ee-47db-97b5-d0ba9f06a74f" containerName="cilium-operator" Jul 14 21:46:05.658043 kubelet[1916]: I0714 21:46:05.658023 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="9b312e59-3d12-41c8-bef1-0ed26e880928" containerName="cilium-agent" Jul 14 21:46:05.667522 systemd[1]: Created slice kubepods-burstable-podcffd60f5_8339_44a8_9c73_471552e0175b.slice. 
Jul 14 21:46:05.681716 kubelet[1916]: E0714 21:46:05.681670 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sk86q" podUID="be2d6bd7-3a71-447a-9459-feb1900944eb" Jul 14 21:46:05.682592 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 35718 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:46:05.684213 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:46:05.689116 systemd[1]: Started session-24.scope. Jul 14 21:46:05.689467 systemd-logind[1203]: New session 24 of user core. Jul 14 21:46:05.739314 kubelet[1916]: I0714 21:46:05.739204 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-bpf-maps\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739314 kubelet[1916]: I0714 21:46:05.739248 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxc8v\" (UniqueName: \"kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-kube-api-access-dxc8v\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739314 kubelet[1916]: I0714 21:46:05.739268 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-cgroup\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739314 kubelet[1916]: I0714 21:46:05.739284 1916 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cni-path\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739314 kubelet[1916]: I0714 21:46:05.739300 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-etc-cni-netd\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739314 kubelet[1916]: I0714 21:46:05.739317 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-hubble-tls\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739554 kubelet[1916]: I0714 21:46:05.739342 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-lib-modules\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739554 kubelet[1916]: I0714 21:46:05.739362 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-xtables-lock\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739554 kubelet[1916]: I0714 21:46:05.739379 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-ipsec-secrets\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739554 kubelet[1916]: I0714 21:46:05.739395 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-kernel\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739554 kubelet[1916]: I0714 21:46:05.739410 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-run\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739554 kubelet[1916]: I0714 21:46:05.739429 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-clustermesh-secrets\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739685 kubelet[1916]: I0714 21:46:05.739445 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-net\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739685 kubelet[1916]: I0714 21:46:05.739473 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-hostproc\") pod \"cilium-9h6q6\" (UID: 
\"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.739685 kubelet[1916]: I0714 21:46:05.739489 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-config-path\") pod \"cilium-9h6q6\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " pod="kube-system/cilium-9h6q6" Jul 14 21:46:05.825449 sshd[3708]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:05.827910 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:35722.service. Jul 14 21:46:05.831338 kubelet[1916]: E0714 21:46:05.831176 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-dxc8v lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-9h6q6" podUID="cffd60f5-8339-44a8-9c73-471552e0175b" Jul 14 21:46:05.837259 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:35718.service: Deactivated successfully. Jul 14 21:46:05.838137 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 21:46:05.839121 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit. Jul 14 21:46:05.839993 systemd-logind[1203]: Removed session 24. Jul 14 21:46:05.873568 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 35722 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:46:05.875206 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:46:05.878880 systemd-logind[1203]: New session 25 of user core. Jul 14 21:46:05.880053 systemd[1]: Started session-25.scope. 
Jul 14 21:46:05.940594 kubelet[1916]: I0714 21:46:05.940538 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-xtables-lock\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940594 kubelet[1916]: I0714 21:46:05.940592 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-ipsec-secrets\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940787 kubelet[1916]: I0714 21:46:05.940613 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-hostproc\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940787 kubelet[1916]: I0714 21:46:05.940664 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-etc-cni-netd\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940787 kubelet[1916]: I0714 21:46:05.940680 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-lib-modules\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940787 kubelet[1916]: I0714 21:46:05.940698 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cni-path\") pod 
\"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940787 kubelet[1916]: I0714 21:46:05.940720 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxc8v\" (UniqueName: \"kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-kube-api-access-dxc8v\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940787 kubelet[1916]: I0714 21:46:05.940737 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-config-path\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940946 kubelet[1916]: I0714 21:46:05.940753 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-net\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940946 kubelet[1916]: I0714 21:46:05.940768 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-run\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940946 kubelet[1916]: I0714 21:46:05.940786 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-bpf-maps\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940946 kubelet[1916]: I0714 21:46:05.940800 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-cgroup\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940946 kubelet[1916]: I0714 21:46:05.940817 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-kernel\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.940946 kubelet[1916]: I0714 21:46:05.940834 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-clustermesh-secrets\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.941120 kubelet[1916]: I0714 21:46:05.940864 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-hubble-tls\") pod \"cffd60f5-8339-44a8-9c73-471552e0175b\" (UID: \"cffd60f5-8339-44a8-9c73-471552e0175b\") " Jul 14 21:46:05.941352 kubelet[1916]: I0714 21:46:05.941312 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.942090 kubelet[1916]: I0714 21:46:05.942018 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.942369 kubelet[1916]: I0714 21:46:05.942333 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.942422 kubelet[1916]: I0714 21:46:05.942378 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.943570 kubelet[1916]: I0714 21:46:05.943525 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-hostproc" (OuterVolumeSpecName: "hostproc") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.943570 kubelet[1916]: I0714 21:46:05.943570 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.943678 kubelet[1916]: I0714 21:46:05.943587 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.943678 kubelet[1916]: I0714 21:46:05.943602 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cni-path" (OuterVolumeSpecName: "cni-path") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.943752 kubelet[1916]: I0714 21:46:05.943676 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:46:05.943778 kubelet[1916]: I0714 21:46:05.943755 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.943825 kubelet[1916]: I0714 21:46:05.943804 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:05.945704 systemd[1]: var-lib-kubelet-pods-cffd60f5\x2d8339\x2d44a8\x2d9c73\x2d471552e0175b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxc8v.mount: Deactivated successfully. Jul 14 21:46:05.945805 systemd[1]: var-lib-kubelet-pods-cffd60f5\x2d8339\x2d44a8\x2d9c73\x2d471552e0175b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 14 21:46:05.945895 systemd[1]: var-lib-kubelet-pods-cffd60f5\x2d8339\x2d44a8\x2d9c73\x2d471552e0175b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 21:46:05.948198 kubelet[1916]: I0714 21:46:05.948137 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:05.948305 kubelet[1916]: I0714 21:46:05.948244 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:46:05.949517 kubelet[1916]: I0714 21:46:05.949387 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:46:05.949703 kubelet[1916]: I0714 21:46:05.949683 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-kube-api-access-dxc8v" (OuterVolumeSpecName: "kube-api-access-dxc8v") pod "cffd60f5-8339-44a8-9c73-471552e0175b" (UID: "cffd60f5-8339-44a8-9c73-471552e0175b"). InnerVolumeSpecName "kube-api-access-dxc8v". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041412 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041451 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041462 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041471 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041482 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041490 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 21:46:06.041505 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041526 kubelet[1916]: I0714 
21:46:06.041520 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041529 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041538 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041548 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxc8v\" (UniqueName: \"kubernetes.io/projected/cffd60f5-8339-44a8-9c73-471552e0175b-kube-api-access-dxc8v\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041556 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041564 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041572 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.041814 kubelet[1916]: I0714 21:46:06.041579 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/cffd60f5-8339-44a8-9c73-471552e0175b-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 21:46:06.686739 systemd[1]: Removed slice kubepods-burstable-podcffd60f5_8339_44a8_9c73_471552e0175b.slice. Jul 14 21:46:06.846949 systemd[1]: var-lib-kubelet-pods-cffd60f5\x2d8339\x2d44a8\x2d9c73\x2d471552e0175b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 21:46:06.935020 kubelet[1916]: W0714 21:46:06.934983 1916 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 21:46:06.935369 kubelet[1916]: E0714 21:46:06.935033 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 21:46:06.935369 kubelet[1916]: I0714 21:46:06.934917 1916 status_manager.go:890] "Failed to get status for pod" podUID="03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b" pod="kube-system/cilium-tjmnm" err="pods \"cilium-tjmnm\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 14 21:46:06.936100 kubelet[1916]: W0714 21:46:06.936080 1916 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship 
found between node 'localhost' and this object Jul 14 21:46:06.936261 kubelet[1916]: E0714 21:46:06.936236 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 21:46:06.937621 systemd[1]: Created slice kubepods-burstable-pod03aaaeb4_cfd4_41f8_8dc9_a35e56b4b06b.slice. Jul 14 21:46:06.947031 kubelet[1916]: I0714 21:46:06.946989 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cilium-ipsec-secrets\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947031 kubelet[1916]: I0714 21:46:06.947032 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-xtables-lock\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947031 kubelet[1916]: I0714 21:46:06.947049 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cilium-run\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947265 kubelet[1916]: I0714 21:46:06.947067 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cilium-cgroup\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947265 kubelet[1916]: I0714 21:46:06.947083 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cni-path\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947265 kubelet[1916]: I0714 21:46:06.947099 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-etc-cni-netd\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947265 kubelet[1916]: I0714 21:46:06.947115 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-host-proc-sys-net\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947265 kubelet[1916]: I0714 21:46:06.947132 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-hubble-tls\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947265 kubelet[1916]: I0714 21:46:06.947146 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-clustermesh-secrets\") pod \"cilium-tjmnm\" (UID: 
\"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947469 kubelet[1916]: I0714 21:46:06.947172 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-host-proc-sys-kernel\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947469 kubelet[1916]: I0714 21:46:06.947192 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-hostproc\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947469 kubelet[1916]: I0714 21:46:06.947206 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-lib-modules\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947469 kubelet[1916]: I0714 21:46:06.947220 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-bpf-maps\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947469 kubelet[1916]: I0714 21:46:06.947236 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbsk\" (UniqueName: \"kubernetes.io/projected/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-kube-api-access-bpbsk\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:06.947469 kubelet[1916]: I0714 
21:46:06.947254 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cilium-config-path\") pod \"cilium-tjmnm\" (UID: \"03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b\") " pod="kube-system/cilium-tjmnm" Jul 14 21:46:07.682108 kubelet[1916]: E0714 21:46:07.682048 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sk86q" podUID="be2d6bd7-3a71-447a-9459-feb1900944eb" Jul 14 21:46:07.682366 kubelet[1916]: E0714 21:46:07.682345 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.050881 kubelet[1916]: E0714 21:46:08.049423 1916 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 14 21:46:08.050881 kubelet[1916]: E0714 21:46:08.049455 1916 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-tjmnm: failed to sync secret cache: timed out waiting for the condition Jul 14 21:46:08.050881 kubelet[1916]: E0714 21:46:08.050398 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-hubble-tls podName:03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b nodeName:}" failed. No retries permitted until 2025-07-14 21:46:08.55037283 +0000 UTC m=+89.989079941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-hubble-tls") pod "cilium-tjmnm" (UID: "03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b") : failed to sync secret cache: timed out waiting for the condition Jul 14 21:46:08.059626 kubelet[1916]: E0714 21:46:08.051283 1916 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 14 21:46:08.059626 kubelet[1916]: E0714 21:46:08.051355 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cilium-config-path podName:03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b nodeName:}" failed. No retries permitted until 2025-07-14 21:46:08.551342591 +0000 UTC m=+89.990049702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b-cilium-config-path") pod "cilium-tjmnm" (UID: "03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b") : failed to sync configmap cache: timed out waiting for the condition Jul 14 21:46:08.683823 kubelet[1916]: I0714 21:46:08.683790 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffd60f5-8339-44a8-9c73-471552e0175b" path="/var/lib/kubelet/pods/cffd60f5-8339-44a8-9c73-471552e0175b/volumes" Jul 14 21:46:08.740621 kubelet[1916]: E0714 21:46:08.740580 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.741269 env[1216]: time="2025-07-14T21:46:08.741085746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjmnm,Uid:03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b,Namespace:kube-system,Attempt:0,}" Jul 14 21:46:08.742174 kubelet[1916]: E0714 21:46:08.742140 1916 kubelet.go:3002] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 21:46:08.763707 env[1216]: time="2025-07-14T21:46:08.763090046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:46:08.763707 env[1216]: time="2025-07-14T21:46:08.763129606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:46:08.763707 env[1216]: time="2025-07-14T21:46:08.763140006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:46:08.763937 env[1216]: time="2025-07-14T21:46:08.763836927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535 pid=3752 runtime=io.containerd.runc.v2 Jul 14 21:46:08.783248 systemd[1]: run-containerd-runc-k8s.io-7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535-runc.u4lBrj.mount: Deactivated successfully. Jul 14 21:46:08.787500 systemd[1]: Started cri-containerd-7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535.scope. 
Jul 14 21:46:08.832585 env[1216]: time="2025-07-14T21:46:08.832538790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjmnm,Uid:03aaaeb4-cfd4-41f8-8dc9-a35e56b4b06b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\"" Jul 14 21:46:08.833311 kubelet[1916]: E0714 21:46:08.833274 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.836885 env[1216]: time="2025-07-14T21:46:08.836780194Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:46:08.856509 env[1216]: time="2025-07-14T21:46:08.856412052Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5\"" Jul 14 21:46:08.857804 env[1216]: time="2025-07-14T21:46:08.857258373Z" level=info msg="StartContainer for \"d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5\"" Jul 14 21:46:08.873904 systemd[1]: Started cri-containerd-d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5.scope. Jul 14 21:46:08.920789 env[1216]: time="2025-07-14T21:46:08.920571911Z" level=info msg="StartContainer for \"d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5\" returns successfully" Jul 14 21:46:08.924300 systemd[1]: cri-containerd-d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5.scope: Deactivated successfully. 
Jul 14 21:46:08.927787 kubelet[1916]: E0714 21:46:08.927710 1916 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03aaaeb4_cfd4_41f8_8dc9_a35e56b4b06b.slice/cri-containerd-d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5.scope\": RecentStats: unable to find data in memory cache]"
Jul 14 21:46:08.967679 env[1216]: time="2025-07-14T21:46:08.967560514Z" level=info msg="shim disconnected" id=d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5
Jul 14 21:46:08.970245 env[1216]: time="2025-07-14T21:46:08.970212877Z" level=warning msg="cleaning up after shim disconnected" id=d503e3a79c3e58a0d0f7f7807df16c788e5caa1c06c1080db392482f538cc4a5 namespace=k8s.io
Jul 14 21:46:08.970359 env[1216]: time="2025-07-14T21:46:08.970343757Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:08.977728 env[1216]: time="2025-07-14T21:46:08.977692124Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3838 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:09.682182 kubelet[1916]: E0714 21:46:09.682123 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sk86q" podUID="be2d6bd7-3a71-447a-9459-feb1900944eb"
Jul 14 21:46:09.901693 kubelet[1916]: E0714 21:46:09.901647 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:09.904187 env[1216]: time="2025-07-14T21:46:09.904134578Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 21:46:09.924290 env[1216]: time="2025-07-14T21:46:09.924243177Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450\""
Jul 14 21:46:09.925580 env[1216]: time="2025-07-14T21:46:09.925549740Z" level=info msg="StartContainer for \"ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450\""
Jul 14 21:46:09.945932 systemd[1]: Started cri-containerd-ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450.scope.
Jul 14 21:46:09.980590 env[1216]: time="2025-07-14T21:46:09.978182362Z" level=info msg="StartContainer for \"ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450\" returns successfully"
Jul 14 21:46:09.985939 systemd[1]: cri-containerd-ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450.scope: Deactivated successfully.
Jul 14 21:46:10.051015 env[1216]: time="2025-07-14T21:46:10.050651192Z" level=info msg="shim disconnected" id=ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450
Jul 14 21:46:10.051015 env[1216]: time="2025-07-14T21:46:10.050701432Z" level=warning msg="cleaning up after shim disconnected" id=ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450 namespace=k8s.io
Jul 14 21:46:10.051015 env[1216]: time="2025-07-14T21:46:10.050712592Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:10.064254 env[1216]: time="2025-07-14T21:46:10.064207231Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3900 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:10.563793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad4ba4ce4c981f61c6e419f0cd9a7a4c48251787ca68e233ce95c928a8fac450-rootfs.mount: Deactivated successfully.
Jul 14 21:46:10.894768 kubelet[1916]: I0714 21:46:10.894705 1916 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T21:46:10Z","lastTransitionTime":"2025-07-14T21:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 14 21:46:10.905880 kubelet[1916]: E0714 21:46:10.905227 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:10.908603 env[1216]: time="2025-07-14T21:46:10.908546905Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:46:11.028299 env[1216]: time="2025-07-14T21:46:11.028238802Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8\""
Jul 14 21:46:11.028944 env[1216]: time="2025-07-14T21:46:11.028913084Z" level=info msg="StartContainer for \"3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8\""
Jul 14 21:46:11.052307 systemd[1]: Started cri-containerd-3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8.scope.
Jul 14 21:46:11.084530 systemd[1]: cri-containerd-3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8.scope: Deactivated successfully.
Jul 14 21:46:11.086779 env[1216]: time="2025-07-14T21:46:11.086730109Z" level=info msg="StartContainer for \"3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8\" returns successfully"
Jul 14 21:46:11.109327 env[1216]: time="2025-07-14T21:46:11.109268796Z" level=info msg="shim disconnected" id=3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8
Jul 14 21:46:11.109327 env[1216]: time="2025-07-14T21:46:11.109326837Z" level=warning msg="cleaning up after shim disconnected" id=3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8 namespace=k8s.io
Jul 14 21:46:11.109545 env[1216]: time="2025-07-14T21:46:11.109338117Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:11.115881 env[1216]: time="2025-07-14T21:46:11.115833262Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3956 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:11.563926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3651fa7d68cb1b0e8e7d47c4a1dff65d0d444f5ee8abd8d48bf4590a097a4dc8-rootfs.mount: Deactivated successfully.
Jul 14 21:46:11.681769 kubelet[1916]: E0714 21:46:11.681707 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sk86q" podUID="be2d6bd7-3a71-447a-9459-feb1900944eb"
Jul 14 21:46:11.909180 kubelet[1916]: E0714 21:46:11.909136 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:11.915313 env[1216]: time="2025-07-14T21:46:11.913517603Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:46:11.932206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43392819.mount: Deactivated successfully.
Jul 14 21:46:11.949901 env[1216]: time="2025-07-14T21:46:11.949751144Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410\""
Jul 14 21:46:11.952334 env[1216]: time="2025-07-14T21:46:11.950494546Z" level=info msg="StartContainer for \"432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410\""
Jul 14 21:46:11.967012 systemd[1]: Started cri-containerd-432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410.scope.
Jul 14 21:46:12.013736 systemd[1]: cri-containerd-432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410.scope: Deactivated successfully.
Jul 14 21:46:12.024796 env[1216]: time="2025-07-14T21:46:12.024734257Z" level=info msg="StartContainer for \"432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410\" returns successfully"
Jul 14 21:46:12.060470 env[1216]: time="2025-07-14T21:46:12.060391029Z" level=info msg="shim disconnected" id=432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410
Jul 14 21:46:12.060470 env[1216]: time="2025-07-14T21:46:12.060462789Z" level=warning msg="cleaning up after shim disconnected" id=432534742830eb8e25cd54a5200d062c28d4686400b81d818f41d8b1249f9410 namespace=k8s.io
Jul 14 21:46:12.060470 env[1216]: time="2025-07-14T21:46:12.060473469Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:12.071407 env[1216]: time="2025-07-14T21:46:12.071338601Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4012 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:12.683017 kubelet[1916]: E0714 21:46:12.682974 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:12.916272 kubelet[1916]: E0714 21:46:12.916238 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:12.922511 env[1216]: time="2025-07-14T21:46:12.922009937Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:46:12.952821 env[1216]: time="2025-07-14T21:46:12.952730045Z" level=info msg="CreateContainer within sandbox \"7493d3d2eb585ad76836f684eb902b68e45078fb9e21904b8e2e283aa48b6535\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6\""
Jul 14 21:46:12.955499 env[1216]: time="2025-07-14T21:46:12.955467218Z" level=info msg="StartContainer for \"04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6\""
Jul 14 21:46:12.969426 systemd[1]: Started cri-containerd-04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6.scope.
Jul 14 21:46:13.008555 env[1216]: time="2025-07-14T21:46:13.008489681Z" level=info msg="StartContainer for \"04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6\" returns successfully"
Jul 14 21:46:13.250433 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 14 21:46:13.681814 kubelet[1916]: E0714 21:46:13.681755 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sk86q" podUID="be2d6bd7-3a71-447a-9459-feb1900944eb"
Jul 14 21:46:13.922442 kubelet[1916]: E0714 21:46:13.921950 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:14.215571 systemd[1]: run-containerd-runc-k8s.io-04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6-runc.rOVjU5.mount: Deactivated successfully.
Jul 14 21:46:14.268883 kubelet[1916]: E0714 21:46:14.268827 1916 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:45142->127.0.0.1:43077: read tcp 127.0.0.1:45142->127.0.0.1:43077: read: connection reset by peer
Jul 14 21:46:14.924192 kubelet[1916]: E0714 21:46:14.924152 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:15.682514 kubelet[1916]: E0714 21:46:15.682470 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:15.682675 kubelet[1916]: E0714 21:46:15.682654 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:15.925556 kubelet[1916]: E0714 21:46:15.925526 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:16.130938 systemd-networkd[1048]: lxc_health: Link UP
Jul 14 21:46:16.140890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 14 21:46:16.143063 systemd-networkd[1048]: lxc_health: Gained carrier
Jul 14 21:46:16.344558 systemd[1]: run-containerd-runc-k8s.io-04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6-runc.ldTGdh.mount: Deactivated successfully.
Jul 14 21:46:16.758100 kubelet[1916]: I0714 21:46:16.758031 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tjmnm" podStartSLOduration=10.758015886 podStartE2EDuration="10.758015886s" podCreationTimestamp="2025-07-14 21:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:46:13.945219153 +0000 UTC m=+95.383926264" watchObservedRunningTime="2025-07-14 21:46:16.758015886 +0000 UTC m=+98.196722957"
Jul 14 21:46:16.927193 kubelet[1916]: E0714 21:46:16.927150 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:17.225045 systemd-networkd[1048]: lxc_health: Gained IPv6LL
Jul 14 21:46:17.682444 kubelet[1916]: E0714 21:46:17.682403 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:17.929167 kubelet[1916]: E0714 21:46:17.929137 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:18.476430 systemd[1]: run-containerd-runc-k8s.io-04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6-runc.eLIuKU.mount: Deactivated successfully.
Jul 14 21:46:18.933699 kubelet[1916]: E0714 21:46:18.933659 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:20.602253 systemd[1]: run-containerd-runc-k8s.io-04122424bc2a88d8e11dbe277654d84523c83fcfe81c781e1e51f98f049bb4e6-runc.my5ElR.mount: Deactivated successfully.
Jul 14 21:46:22.785185 sshd[3721]: pam_unix(sshd:session): session closed for user core
Jul 14 21:46:22.788022 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:35722.service: Deactivated successfully.
Jul 14 21:46:22.788712 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 21:46:22.789633 systemd-logind[1203]: Session 25 logged out. Waiting for processes to exit.
Jul 14 21:46:22.790420 systemd-logind[1203]: Removed session 25.