Jul 14 21:44:56.728026 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:44:56.728047 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 14 20:49:56 -00 2025
Jul 14 21:44:56.728054 kernel: efi: EFI v2.70 by EDK II
Jul 14 21:44:56.728060 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 14 21:44:56.728065 kernel: random: crng init done
Jul 14 21:44:56.728070 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:44:56.728077 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 14 21:44:56.728084 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:44:56.728090 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728095 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728100 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728105 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728111 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728116 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728124 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728129 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728135 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:44:56.728141 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:44:56.728147 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:44:56.728153 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:44:56.728158 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff]
Jul 14 21:44:56.728164 kernel: Zone ranges:
Jul 14 21:44:56.728170 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:44:56.728177 kernel: DMA32 empty
Jul 14 21:44:56.728182 kernel: Normal empty
Jul 14 21:44:56.728188 kernel: Movable zone start for each node
Jul 14 21:44:56.728193 kernel: Early memory node ranges
Jul 14 21:44:56.728199 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 14 21:44:56.728204 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 14 21:44:56.728210 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 14 21:44:56.728215 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 14 21:44:56.728221 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 14 21:44:56.728226 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 14 21:44:56.728232 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 14 21:44:56.728237 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:44:56.728244 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:44:56.728250 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:44:56.728264 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:44:56.728270 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:44:56.728276 kernel: psci: Trusted OS migration not required
Jul 14 21:44:56.728284 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:44:56.728290 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:44:56.728298 kernel: ACPI: SRAT not present
Jul 14 21:44:56.728304 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 14 21:44:56.728310 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 14 21:44:56.728316 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:44:56.728322 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:44:56.728329 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:44:56.728334 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:44:56.728340 kernel: CPU features: detected: Spectre-v4
Jul 14 21:44:56.728347 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:44:56.728354 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:44:56.728360 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:44:56.728366 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:44:56.728372 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:44:56.728378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:44:56.728384 kernel: Policy zone: DMA
Jul 14 21:44:56.728392 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236
Jul 14 21:44:56.728398 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:44:56.728404 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:44:56.728411 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:44:56.728417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:44:56.728425 kernel: Memory: 2457344K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114944K reserved, 0K cma-reserved)
Jul 14 21:44:56.728431 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:44:56.728437 kernel: trace event string verifier disabled
Jul 14 21:44:56.728443 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:44:56.728449 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:44:56.728458 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:44:56.728466 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:44:56.728474 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:44:56.728481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:44:56.728488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:44:56.728494 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:44:56.728501 kernel: GICv3: 256 SPIs implemented
Jul 14 21:44:56.728507 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:44:56.728513 kernel: GICv3: Distributor has no Range Selector support
Jul 14 21:44:56.728519 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:44:56.728525 kernel: GICv3: 16 PPIs implemented
Jul 14 21:44:56.728531 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:44:56.728537 kernel: ACPI: SRAT not present
Jul 14 21:44:56.728543 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:44:56.728549 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:44:56.728555 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:44:56.728561 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 14 21:44:56.728567 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 14 21:44:56.728575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:44:56.728581 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:44:56.728588 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:44:56.728594 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:44:56.728600 kernel: arm-pv: using stolen time PV
Jul 14 21:44:56.728606 kernel: Console: colour dummy device 80x25
Jul 14 21:44:56.728613 kernel: ACPI: Core revision 20210730
Jul 14 21:44:56.728620 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:44:56.728626 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:44:56.728632 kernel: LSM: Security Framework initializing
Jul 14 21:44:56.728640 kernel: SELinux: Initializing.
Jul 14 21:44:56.728646 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:44:56.728653 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:44:56.728659 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:44:56.728665 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:44:56.728671 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:44:56.728677 kernel: Remapping and enabling EFI services.
Jul 14 21:44:56.728683 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:44:56.728690 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:44:56.728698 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:44:56.728704 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 14 21:44:56.728710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:44:56.728716 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:44:56.728723 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:44:56.728730 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:44:56.728736 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 14 21:44:56.728742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:44:56.728748 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:44:56.728755 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:44:56.728762 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:44:56.728768 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 14 21:44:56.728775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:44:56.728781 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:44:56.728792 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:44:56.728800 kernel: SMP: Total of 4 processors activated.
Jul 14 21:44:56.728806 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:44:56.728813 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:44:56.728828 kernel: CPU features: detected: Common not Private translations
Jul 14 21:44:56.728835 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:44:56.728841 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:44:56.728848 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:44:56.728856 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:44:56.728862 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:44:56.728869 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:44:56.728875 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:44:56.728882 kernel: alternatives: patching kernel code
Jul 14 21:44:56.728890 kernel: devtmpfs: initialized
Jul 14 21:44:56.728896 kernel: KASLR enabled
Jul 14 21:44:56.728903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:44:56.728910 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:44:56.728917 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:44:56.728923 kernel: SMBIOS 3.0.0 present.
Jul 14 21:44:56.728929 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 14 21:44:56.728936 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:44:56.728943 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:44:56.728951 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:44:56.728958 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:44:56.728964 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:44:56.728971 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Jul 14 21:44:56.728977 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:44:56.728984 kernel: cpuidle: using governor menu
Jul 14 21:44:56.728990 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:44:56.728997 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:44:56.729003 kernel: ACPI: bus type PCI registered
Jul 14 21:44:56.729011 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:44:56.729018 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:44:56.729024 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:44:56.729031 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:44:56.729037 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:44:56.729044 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:44:56.729050 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 21:44:56.729057 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:44:56.729064 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:44:56.729071 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:44:56.729078 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:44:56.729085 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 14 21:44:56.729092 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 14 21:44:56.729099 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 14 21:44:56.729106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:44:56.729113 kernel: ACPI: Interpreter enabled
Jul 14 21:44:56.729120 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:44:56.729127 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:44:56.729135 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:44:56.729141 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:44:56.729148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:44:56.729280 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:44:56.729347 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:44:56.729406 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:44:56.729469 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:44:56.729536 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:44:56.729545 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:44:56.729551 kernel: PCI host bridge to bus 0000:00
Jul 14 21:44:56.729625 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:44:56.729710 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:44:56.729771 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:44:56.729832 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:44:56.729911 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:44:56.729981 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:44:56.730043 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:44:56.730102 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:44:56.730160 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:44:56.730218 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:44:56.730289 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:44:56.730351 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:44:56.730404 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:44:56.730455 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:44:56.730507 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:44:56.730516 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:44:56.730523 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:44:56.730529 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:44:56.730537 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:44:56.730544 kernel: iommu: Default domain type: Translated
Jul 14 21:44:56.730551 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:44:56.730557 kernel: vgaarb: loaded
Jul 14 21:44:56.730564 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 14 21:44:56.730571 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 14 21:44:56.730577 kernel: PTP clock support registered
Jul 14 21:44:56.730584 kernel: Registered efivars operations
Jul 14 21:44:56.730591 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:44:56.730598 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:44:56.730606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:44:56.730612 kernel: pnp: PnP ACPI init
Jul 14 21:44:56.730688 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:44:56.730698 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:44:56.730705 kernel: NET: Registered PF_INET protocol family
Jul 14 21:44:56.730712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:44:56.730719 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:44:56.730725 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:44:56.730734 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:44:56.730741 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 14 21:44:56.730747 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:44:56.730754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:44:56.730761 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:44:56.730767 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:44:56.730774 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:44:56.730780 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:44:56.730788 kernel: kvm [1]: HYP mode not available
Jul 14 21:44:56.730796 kernel: Initialise system trusted keyrings
Jul 14 21:44:56.730803 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:44:56.730810 kernel: Key type asymmetric registered
Jul 14 21:44:56.730823 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:44:56.730831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 14 21:44:56.730838 kernel: io scheduler mq-deadline registered
Jul 14 21:44:56.730844 kernel: io scheduler kyber registered
Jul 14 21:44:56.730851 kernel: io scheduler bfq registered
Jul 14 21:44:56.730858 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:44:56.730866 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:44:56.730873 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:44:56.730934 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:44:56.730943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:44:56.730950 kernel: thunder_xcv, ver 1.0
Jul 14 21:44:56.730956 kernel: thunder_bgx, ver 1.0
Jul 14 21:44:56.730963 kernel: nicpf, ver 1.0
Jul 14 21:44:56.730969 kernel: nicvf, ver 1.0
Jul 14 21:44:56.731040 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:44:56.731098 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:44:56 UTC (1752529496)
Jul 14 21:44:56.731107 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:44:56.731114 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:44:56.731120 kernel: Segment Routing with IPv6
Jul 14 21:44:56.731127 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:44:56.731133 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:44:56.731140 kernel: Key type dns_resolver registered
Jul 14 21:44:56.731147 kernel: registered taskstats version 1
Jul 14 21:44:56.731154 kernel: Loading compiled-in X.509 certificates
Jul 14 21:44:56.731161 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 118351bb2b1409a8fe1c98db16ecff1bb5342a27'
Jul 14 21:44:56.731168 kernel: Key type .fscrypt registered
Jul 14 21:44:56.731174 kernel: Key type fscrypt-provisioning registered
Jul 14 21:44:56.731181 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:44:56.731188 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:44:56.731194 kernel: ima: No architecture policies found
Jul 14 21:44:56.731201 kernel: clk: Disabling unused clocks
Jul 14 21:44:56.731207 kernel: Freeing unused kernel memory: 36416K
Jul 14 21:44:56.731215 kernel: Run /init as init process
Jul 14 21:44:56.731222 kernel: with arguments:
Jul 14 21:44:56.731228 kernel: /init
Jul 14 21:44:56.731235 kernel: with environment:
Jul 14 21:44:56.731241 kernel: HOME=/
Jul 14 21:44:56.731247 kernel: TERM=linux
Jul 14 21:44:56.731254 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:44:56.731270 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 14 21:44:56.731280 systemd[1]: Detected virtualization kvm.
Jul 14 21:44:56.731288 systemd[1]: Detected architecture arm64.
Jul 14 21:44:56.731295 systemd[1]: Running in initrd.
Jul 14 21:44:56.731302 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:44:56.731308 systemd[1]: Hostname set to .
Jul 14 21:44:56.731316 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:44:56.731323 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:44:56.731330 systemd[1]: Started systemd-ask-password-console.path.
Jul 14 21:44:56.731337 systemd[1]: Reached target cryptsetup.target.
Jul 14 21:44:56.731344 systemd[1]: Reached target paths.target.
Jul 14 21:44:56.731351 systemd[1]: Reached target slices.target.
Jul 14 21:44:56.731358 systemd[1]: Reached target swap.target.
Jul 14 21:44:56.731365 systemd[1]: Reached target timers.target.
Jul 14 21:44:56.731373 systemd[1]: Listening on iscsid.socket.
Jul 14 21:44:56.731380 systemd[1]: Listening on iscsiuio.socket.
Jul 14 21:44:56.731388 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 14 21:44:56.731395 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 14 21:44:56.731402 systemd[1]: Listening on systemd-journald.socket.
Jul 14 21:44:56.731409 systemd[1]: Listening on systemd-networkd.socket.
Jul 14 21:44:56.731416 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 14 21:44:56.731423 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 14 21:44:56.731430 systemd[1]: Reached target sockets.target.
Jul 14 21:44:56.731437 systemd[1]: Starting kmod-static-nodes.service...
Jul 14 21:44:56.731444 systemd[1]: Finished network-cleanup.service.
Jul 14 21:44:56.731453 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:44:56.731460 systemd[1]: Starting systemd-journald.service...
Jul 14 21:44:56.731466 systemd[1]: Starting systemd-modules-load.service...
Jul 14 21:44:56.731473 systemd[1]: Starting systemd-resolved.service...
Jul 14 21:44:56.731480 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 14 21:44:56.731487 systemd[1]: Finished kmod-static-nodes.service.
Jul 14 21:44:56.731494 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:44:56.731502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 14 21:44:56.731512 systemd-journald[289]: Journal started
Jul 14 21:44:56.731559 systemd-journald[289]: Runtime Journal (/run/log/journal/6f5d0cdfd1b44ce18732436f03229263) is 6.0M, max 48.7M, 42.6M free.
Jul 14 21:44:56.727444 systemd-modules-load[290]: Inserted module 'overlay'
Jul 14 21:44:56.733066 systemd[1]: Started systemd-journald.service.
Jul 14 21:44:56.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.738909 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 14 21:44:56.744207 kernel: audit: type=1130 audit(1752529496.737:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.744230 kernel: audit: type=1130 audit(1752529496.740:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.741326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 14 21:44:56.744581 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 14 21:44:56.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.747849 kernel: audit: type=1130 audit(1752529496.742:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.761475 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:44:56.762982 systemd-resolved[291]: Positive Trust Anchors:
Jul 14 21:44:56.762997 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:44:56.763025 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 14 21:44:56.767383 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jul 14 21:44:56.773297 kernel: audit: type=1130 audit(1752529496.769:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.773321 kernel: Bridge firewalling registered
Jul 14 21:44:56.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.768305 systemd[1]: Started systemd-resolved.service.
Jul 14 21:44:56.769212 systemd[1]: Reached target nss-lookup.target.
Jul 14 21:44:56.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.772931 systemd-modules-load[290]: Inserted module 'br_netfilter'
Jul 14 21:44:56.777696 kernel: audit: type=1130 audit(1752529496.773:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.773501 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 14 21:44:56.775310 systemd[1]: Starting dracut-cmdline.service...
Jul 14 21:44:56.785964 dracut-cmdline[307]: dracut-dracut-053
Jul 14 21:44:56.789369 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236
Jul 14 21:44:56.793847 kernel: SCSI subsystem initialized
Jul 14 21:44:56.801496 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:44:56.801553 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:44:56.801563 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 14 21:44:56.805129 systemd-modules-load[290]: Inserted module 'dm_multipath'
Jul 14 21:44:56.806011 systemd[1]: Finished systemd-modules-load.service.
Jul 14 21:44:56.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.807641 systemd[1]: Starting systemd-sysctl.service...
Jul 14 21:44:56.810132 kernel: audit: type=1130 audit(1752529496.805:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.816312 systemd[1]: Finished systemd-sysctl.service.
Jul 14 21:44:56.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.819849 kernel: audit: type=1130 audit(1752529496.816:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.860842 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:44:56.873848 kernel: iscsi: registered transport (tcp)
Jul 14 21:44:56.889840 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:44:56.889858 kernel: QLogic iSCSI HBA Driver
Jul 14 21:44:56.929876 systemd[1]: Finished dracut-cmdline.service.
Jul 14 21:44:56.933918 kernel: audit: type=1130 audit(1752529496.929:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:56.931651 systemd[1]: Starting dracut-pre-udev.service...
Jul 14 21:44:56.988858 kernel: raid6: neonx8 gen() 13705 MB/s
Jul 14 21:44:57.005832 kernel: raid6: neonx8 xor() 10601 MB/s
Jul 14 21:44:57.022827 kernel: raid6: neonx4 gen() 13497 MB/s
Jul 14 21:44:57.039830 kernel: raid6: neonx4 xor() 11109 MB/s
Jul 14 21:44:57.056831 kernel: raid6: neonx2 gen() 12905 MB/s
Jul 14 21:44:57.073832 kernel: raid6: neonx2 xor() 10345 MB/s
Jul 14 21:44:57.090831 kernel: raid6: neonx1 gen() 10473 MB/s
Jul 14 21:44:57.107832 kernel: raid6: neonx1 xor() 8646 MB/s
Jul 14 21:44:57.124838 kernel: raid6: int64x8 gen() 6241 MB/s
Jul 14 21:44:57.141835 kernel: raid6: int64x8 xor() 3521 MB/s
Jul 14 21:44:57.158838 kernel: raid6: int64x4 gen() 7171 MB/s
Jul 14 21:44:57.175835 kernel: raid6: int64x4 xor() 3827 MB/s
Jul 14 21:44:57.192837 kernel: raid6: int64x2 gen() 6083 MB/s
Jul 14 21:44:57.209839 kernel: raid6: int64x2 xor() 3281 MB/s
Jul 14 21:44:57.226835 kernel: raid6: int64x1 gen() 5028 MB/s
Jul 14 21:44:57.244115 kernel: raid6: int64x1 xor() 2644 MB/s
Jul 14 21:44:57.244127 kernel: raid6: using algorithm neonx8 gen() 13705 MB/s
Jul 14 21:44:57.244136 kernel: raid6: .... xor() 10601 MB/s, rmw enabled
Jul 14 21:44:57.244145 kernel: raid6: using neon recovery algorithm
Jul 14 21:44:57.256840 kernel: xor: measuring software checksum speed
Jul 14 21:44:57.257910 kernel: 8regs : 15908 MB/sec
Jul 14 21:44:57.257926 kernel: 32regs : 20691 MB/sec
Jul 14 21:44:57.258929 kernel: arm64_neon : 27710 MB/sec
Jul 14 21:44:57.258942 kernel: xor: using function: arm64_neon (27710 MB/sec)
Jul 14 21:44:57.320847 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 14 21:44:57.332492 systemd[1]: Finished dracut-pre-udev.service.
Jul 14 21:44:57.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:57.334000 audit: BPF prog-id=7 op=LOAD
Jul 14 21:44:57.334000 audit: BPF prog-id=8 op=LOAD
Jul 14 21:44:57.335862 kernel: audit: type=1130 audit(1752529497.332:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:57.335974 systemd[1]: Starting systemd-udevd.service...
Jul 14 21:44:57.351478 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Jul 14 21:44:57.354871 systemd[1]: Started systemd-udevd.service.
Jul 14 21:44:57.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:57.356695 systemd[1]: Starting dracut-pre-trigger.service...
Jul 14 21:44:57.367954 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Jul 14 21:44:57.395852 systemd[1]: Finished dracut-pre-trigger.service.
Jul 14 21:44:57.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:57.397262 systemd[1]: Starting systemd-udev-trigger.service...
Jul 14 21:44:57.432792 systemd[1]: Finished systemd-udev-trigger.service.
Jul 14 21:44:57.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:44:57.463322 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:44:57.467435 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:44:57.467449 kernel: GPT:9289727 != 19775487
Jul 14 21:44:57.467458 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:44:57.467473 kernel: GPT:9289727 != 19775487 Jul 14 21:44:57.467481 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 21:44:57.467489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:57.480861 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (539) Jul 14 21:44:57.483050 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 14 21:44:57.487555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 14 21:44:57.488352 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 14 21:44:57.492073 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 14 21:44:57.497018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 21:44:57.498465 systemd[1]: Starting disk-uuid.service... Jul 14 21:44:57.564915 disk-uuid[561]: Primary Header is updated. Jul 14 21:44:57.564915 disk-uuid[561]: Secondary Entries is updated. Jul 14 21:44:57.564915 disk-uuid[561]: Secondary Header is updated. Jul 14 21:44:57.568836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:57.577850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:57.580848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:58.583603 disk-uuid[562]: The operation has completed successfully. Jul 14 21:44:58.584558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:44:58.607412 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 21:44:58.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.607511 systemd[1]: Finished disk-uuid.service. 
Jul 14 21:44:58.608959 systemd[1]: Starting verity-setup.service... Jul 14 21:44:58.627484 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 14 21:44:58.647231 systemd[1]: Found device dev-mapper-usr.device. Jul 14 21:44:58.649304 systemd[1]: Mounting sysusr-usr.mount... Jul 14 21:44:58.651324 systemd[1]: Finished verity-setup.service. Jul 14 21:44:58.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.698595 systemd[1]: Mounted sysusr-usr.mount. Jul 14 21:44:58.699187 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 21:44:58.699874 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 14 21:44:58.701630 systemd[1]: Starting ignition-setup.service... Jul 14 21:44:58.703766 systemd[1]: Starting parse-ip-for-networkd.service... Jul 14 21:44:58.711883 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 21:44:58.711930 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:44:58.711940 kernel: BTRFS info (device vda6): has skinny extents Jul 14 21:44:58.720537 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 21:44:58.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.729991 systemd[1]: Finished ignition-setup.service. Jul 14 21:44:58.731415 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 21:44:58.791063 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 21:44:58.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 14 21:44:58.791000 audit: BPF prog-id=9 op=LOAD Jul 14 21:44:58.793035 systemd[1]: Starting systemd-networkd.service... Jul 14 21:44:58.825660 ignition[654]: Ignition 2.14.0 Jul 14 21:44:58.825672 ignition[654]: Stage: fetch-offline Jul 14 21:44:58.825725 ignition[654]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:58.825734 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:58.825920 ignition[654]: parsed url from cmdline: "" Jul 14 21:44:58.825923 ignition[654]: no config URL provided Jul 14 21:44:58.825928 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 21:44:58.825936 ignition[654]: no config at "/usr/lib/ignition/user.ign" Jul 14 21:44:58.825953 ignition[654]: op(1): [started] loading QEMU firmware config module Jul 14 21:44:58.825958 ignition[654]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 21:44:58.831913 systemd-networkd[738]: lo: Link UP Jul 14 21:44:58.831932 ignition[654]: op(1): [finished] loading QEMU firmware config module Jul 14 21:44:58.833527 systemd-networkd[738]: lo: Gained carrier Jul 14 21:44:58.834952 systemd-networkd[738]: Enumeration completed Jul 14 21:44:58.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.835080 systemd[1]: Started systemd-networkd.service. Jul 14 21:44:58.836011 systemd[1]: Reached target network.target. Jul 14 21:44:58.838118 systemd[1]: Starting iscsiuio.service... Jul 14 21:44:58.839981 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 14 21:44:58.842744 ignition[654]: parsing config with SHA512: 213d525593fa36aba22f2b3753a476a560f88b59aa0b3c8351abdb55a0b30f8140f3f58113517031d024bd66b4c15d55cf4be3a5d78581350ca441a2df095865 Jul 14 21:44:58.842845 systemd-networkd[738]: eth0: Link UP Jul 14 21:44:58.842849 systemd-networkd[738]: eth0: Gained carrier Jul 14 21:44:58.849117 systemd[1]: Started iscsiuio.service. Jul 14 21:44:58.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.850536 systemd[1]: Starting iscsid.service... Jul 14 21:44:58.853604 unknown[654]: fetched base config from "system" Jul 14 21:44:58.853622 unknown[654]: fetched user config from "qemu" Jul 14 21:44:58.854199 ignition[654]: fetch-offline: fetch-offline passed Jul 14 21:44:58.855429 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 21:44:58.855429 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 14 21:44:58.855429 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 21:44:58.855429 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 14 21:44:58.855429 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 21:44:58.855429 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 21:44:58.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.854288 ignition[654]: Ignition finished successfully Jul 14 21:44:58.856899 systemd[1]: Started iscsid.service. Jul 14 21:44:58.860923 systemd[1]: Finished ignition-fetch-offline.service. Jul 14 21:44:58.862591 systemd[1]: Starting dracut-initqueue.service... Jul 14 21:44:58.863842 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 21:44:58.864508 systemd[1]: Starting ignition-kargs.service... Jul 14 21:44:58.868614 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:44:58.872991 systemd[1]: Finished dracut-initqueue.service. Jul 14 21:44:58.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.874206 ignition[747]: Ignition 2.14.0 Jul 14 21:44:58.874040 systemd[1]: Reached target remote-fs-pre.target. Jul 14 21:44:58.874211 ignition[747]: Stage: kargs Jul 14 21:44:58.875128 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 21:44:58.874321 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:58.876260 systemd[1]: Reached target remote-fs.target. 
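The iscsid warnings above ask the operator to create an InitiatorName file in iqn form. A minimal sketch of that fix, assuming the standard open-iscsi layout: on a real host the target path is /etc/iscsi/initiatorname.iscsi, but a temp directory is used here so the snippet runs unprivileged, and the domain/date in the iqn are placeholders rather than values from this log.

```shell
# Hypothetical example: write an InitiatorName line in iqn form, as the
# iscsid warning suggests. Real hosts use /etc/iscsi/initiatorname.iscsi;
# a temp dir stands in here so no root privileges are needed.
dir=$(mktemp -d)
printf 'InitiatorName=iqn.2004-10.com.example:node1\n' > "$dir/initiatorname.iscsi"
cat "$dir/initiatorname.iscsi"
```

With the file in place (at the real path), restarting iscsid.service should clear the "can't open InitiatorName configuration file" message.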
Jul 14 21:44:58.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.874330 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:58.878207 systemd[1]: Starting dracut-pre-mount.service... Jul 14 21:44:58.874997 ignition[747]: kargs: kargs passed Jul 14 21:44:58.879212 systemd[1]: Finished ignition-kargs.service. Jul 14 21:44:58.875043 ignition[747]: Ignition finished successfully Jul 14 21:44:58.881192 systemd[1]: Starting ignition-disks.service... Jul 14 21:44:58.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.887537 systemd[1]: Finished dracut-pre-mount.service. Jul 14 21:44:58.889026 ignition[761]: Ignition 2.14.0 Jul 14 21:44:58.889032 ignition[761]: Stage: disks Jul 14 21:44:58.889125 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:58.890568 systemd[1]: Finished ignition-disks.service. Jul 14 21:44:58.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.889134 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:58.892078 systemd[1]: Reached target initrd-root-device.target. Jul 14 21:44:58.889808 ignition[761]: disks: disks passed Jul 14 21:44:58.893195 systemd[1]: Reached target local-fs-pre.target. Jul 14 21:44:58.889865 ignition[761]: Ignition finished successfully Jul 14 21:44:58.894530 systemd[1]: Reached target local-fs.target. Jul 14 21:44:58.895721 systemd[1]: Reached target sysinit.target. Jul 14 21:44:58.896736 systemd[1]: Reached target basic.target. 
Jul 14 21:44:58.898715 systemd[1]: Starting systemd-fsck-root.service... Jul 14 21:44:58.909039 systemd-fsck[773]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 14 21:44:58.912840 systemd[1]: Finished systemd-fsck-root.service. Jul 14 21:44:58.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.914544 systemd[1]: Mounting sysroot.mount... Jul 14 21:44:58.920847 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 14 21:44:58.921408 systemd[1]: Mounted sysroot.mount. Jul 14 21:44:58.922010 systemd[1]: Reached target initrd-root-fs.target. Jul 14 21:44:58.923875 systemd[1]: Mounting sysroot-usr.mount... Jul 14 21:44:58.924583 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 14 21:44:58.924623 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 21:44:58.924646 systemd[1]: Reached target ignition-diskful.target. Jul 14 21:44:58.926699 systemd[1]: Mounted sysroot-usr.mount. Jul 14 21:44:58.931140 systemd[1]: Starting initrd-setup-root.service... Jul 14 21:44:58.935559 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 21:44:58.939810 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Jul 14 21:44:58.943737 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 21:44:58.947624 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 21:44:58.976335 systemd[1]: Finished initrd-setup-root.service. Jul 14 21:44:58.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:44:58.977799 systemd[1]: Starting ignition-mount.service... Jul 14 21:44:58.978960 systemd[1]: Starting sysroot-boot.service... Jul 14 21:44:58.983410 bash[824]: umount: /sysroot/usr/share/oem: not mounted. Jul 14 21:44:58.992265 ignition[826]: INFO : Ignition 2.14.0 Jul 14 21:44:58.993030 ignition[826]: INFO : Stage: mount Jul 14 21:44:58.993030 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:58.993030 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:58.994914 ignition[826]: INFO : mount: mount passed Jul 14 21:44:58.994914 ignition[826]: INFO : Ignition finished successfully Jul 14 21:44:58.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:58.995085 systemd[1]: Finished ignition-mount.service. Jul 14 21:44:59.004042 systemd[1]: Finished sysroot-boot.service. Jul 14 21:44:59.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:44:59.658041 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 21:44:59.663832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (834) Jul 14 21:44:59.665129 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 21:44:59.665142 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:44:59.665152 kernel: BTRFS info (device vda6): has skinny extents Jul 14 21:44:59.668529 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 21:44:59.669995 systemd[1]: Starting ignition-files.service... 
Jul 14 21:44:59.684584 ignition[854]: INFO : Ignition 2.14.0 Jul 14 21:44:59.684584 ignition[854]: INFO : Stage: files Jul 14 21:44:59.685930 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:44:59.685930 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:44:59.685930 ignition[854]: DEBUG : files: compiled without relabeling support, skipping Jul 14 21:44:59.691535 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 21:44:59.691535 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 21:44:59.695474 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 21:44:59.696565 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 21:44:59.696565 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 21:44:59.696218 unknown[854]: wrote ssh authorized keys file for user: core Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:44:59.699476 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 14 21:44:59.913948 systemd-networkd[738]: eth0: Gained IPv6LL Jul 14 21:45:00.179740 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 14 21:45:00.483380 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:45:00.483380 ignition[854]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 14 21:45:00.486371 ignition[854]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:45:00.486371 ignition[854]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:45:00.486371 ignition[854]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 14 21:45:00.486371 ignition[854]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 21:45:00.486371 ignition[854]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:45:00.526089 ignition[854]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:45:00.528366 ignition[854]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 21:45:00.528366 ignition[854]: INFO : 
files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:45:00.528366 ignition[854]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:45:00.528366 ignition[854]: INFO : files: files passed Jul 14 21:45:00.528366 ignition[854]: INFO : Ignition finished successfully Jul 14 21:45:00.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.528208 systemd[1]: Finished ignition-files.service. Jul 14 21:45:00.530001 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 14 21:45:00.530721 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 14 21:45:00.538896 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 14 21:45:00.531486 systemd[1]: Starting ignition-quench.service... Jul 14 21:45:00.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:00.541182 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 21:45:00.534441 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 21:45:00.534529 systemd[1]: Finished ignition-quench.service. Jul 14 21:45:00.539174 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 14 21:45:00.540717 systemd[1]: Reached target ignition-complete.target. Jul 14 21:45:00.542431 systemd[1]: Starting initrd-parse-etc.service... Jul 14 21:45:00.555502 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 21:45:00.555612 systemd[1]: Finished initrd-parse-etc.service. Jul 14 21:45:00.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.557100 systemd[1]: Reached target initrd-fs.target. Jul 14 21:45:00.558008 systemd[1]: Reached target initrd.target. Jul 14 21:45:00.559069 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 14 21:45:00.559854 systemd[1]: Starting dracut-pre-pivot.service... Jul 14 21:45:00.570319 systemd[1]: Finished dracut-pre-pivot.service. Jul 14 21:45:00.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.571784 systemd[1]: Starting initrd-cleanup.service... Jul 14 21:45:00.580182 systemd[1]: Stopped target nss-lookup.target. Jul 14 21:45:00.580866 systemd[1]: Stopped target remote-cryptsetup.target. 
Jul 14 21:45:00.581996 systemd[1]: Stopped target timers.target. Jul 14 21:45:00.583049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 21:45:00.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.583158 systemd[1]: Stopped dracut-pre-pivot.service. Jul 14 21:45:00.584168 systemd[1]: Stopped target initrd.target. Jul 14 21:45:00.585154 systemd[1]: Stopped target basic.target. Jul 14 21:45:00.586123 systemd[1]: Stopped target ignition-complete.target. Jul 14 21:45:00.587178 systemd[1]: Stopped target ignition-diskful.target. Jul 14 21:45:00.588193 systemd[1]: Stopped target initrd-root-device.target. Jul 14 21:45:00.589323 systemd[1]: Stopped target remote-fs.target. Jul 14 21:45:00.590425 systemd[1]: Stopped target remote-fs-pre.target. Jul 14 21:45:00.591524 systemd[1]: Stopped target sysinit.target. Jul 14 21:45:00.592507 systemd[1]: Stopped target local-fs.target. Jul 14 21:45:00.593550 systemd[1]: Stopped target local-fs-pre.target. Jul 14 21:45:00.594543 systemd[1]: Stopped target swap.target. Jul 14 21:45:00.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.595455 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 21:45:00.595579 systemd[1]: Stopped dracut-pre-mount.service. Jul 14 21:45:00.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.596580 systemd[1]: Stopped target cryptsetup.target. 
Jul 14 21:45:00.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.597470 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:45:00.597567 systemd[1]: Stopped dracut-initqueue.service. Jul 14 21:45:00.598712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:45:00.598808 systemd[1]: Stopped ignition-fetch-offline.service. Jul 14 21:45:00.599765 systemd[1]: Stopped target paths.target. Jul 14 21:45:00.600606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:45:00.604869 systemd[1]: Stopped systemd-ask-password-console.path. Jul 14 21:45:00.605620 systemd[1]: Stopped target slices.target. Jul 14 21:45:00.606646 systemd[1]: Stopped target sockets.target. Jul 14 21:45:00.607594 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:45:00.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.607716 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 14 21:45:00.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.608707 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:45:00.608799 systemd[1]: Stopped ignition-files.service. Jul 14 21:45:00.612221 iscsid[745]: iscsid shutting down. Jul 14 21:45:00.611144 systemd[1]: Stopping ignition-mount.service... Jul 14 21:45:00.613991 systemd[1]: Stopping iscsid.service... Jul 14 21:45:00.615441 systemd[1]: Stopping sysroot-boot.service... 
Jul 14 21:45:00.616019 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:45:00.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.616170 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 21:45:00.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.617181 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:45:00.617288 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 21:45:00.620115 ignition[894]: INFO : Ignition 2.14.0 Jul 14 21:45:00.620115 ignition[894]: INFO : Stage: umount Jul 14 21:45:00.620115 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:45:00.620115 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:45:00.620115 ignition[894]: INFO : umount: umount passed Jul 14 21:45:00.620115 ignition[894]: INFO : Ignition finished successfully Jul 14 21:45:00.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:00.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.619911 systemd[1]: iscsid.service: Deactivated successfully. Jul 14 21:45:00.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.620010 systemd[1]: Stopped iscsid.service. Jul 14 21:45:00.620987 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:45:00.621067 systemd[1]: Stopped ignition-mount.service. Jul 14 21:45:00.622466 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:45:00.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.622535 systemd[1]: Closed iscsid.socket. Jul 14 21:45:00.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.623323 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:45:00.623367 systemd[1]: Stopped ignition-disks.service. Jul 14 21:45:00.624732 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:45:00.624773 systemd[1]: Stopped ignition-kargs.service. Jul 14 21:45:00.625766 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:45:00.625801 systemd[1]: Stopped ignition-setup.service. 
Jul 14 21:45:00.627146 systemd[1]: Stopping iscsiuio.service... Jul 14 21:45:00.629423 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:45:00.629850 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 21:45:00.629933 systemd[1]: Stopped iscsiuio.service. Jul 14 21:45:00.630944 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:45:00.631022 systemd[1]: Finished initrd-cleanup.service. Jul 14 21:45:00.632588 systemd[1]: Stopped target network.target. Jul 14 21:45:00.633483 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:45:00.633519 systemd[1]: Closed iscsiuio.socket. Jul 14 21:45:00.634437 systemd[1]: Stopping systemd-networkd.service... Jul 14 21:45:00.635533 systemd[1]: Stopping systemd-resolved.service... Jul 14 21:45:00.644857 systemd-networkd[738]: eth0: DHCPv6 lease lost Jul 14 21:45:00.645701 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:45:00.645806 systemd[1]: Stopped systemd-resolved.service. Jul 14 21:45:00.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.647447 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:45:00.647532 systemd[1]: Stopped systemd-networkd.service. Jul 14 21:45:00.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.649013 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:45:00.649040 systemd[1]: Closed systemd-networkd.socket. Jul 14 21:45:00.650447 systemd[1]: Stopping network-cleanup.service... 
Jul 14 21:45:00.651000 audit: BPF prog-id=6 op=UNLOAD Jul 14 21:45:00.651000 audit: BPF prog-id=9 op=UNLOAD Jul 14 21:45:00.651399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:45:00.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.651454 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 14 21:45:00.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.652555 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:45:00.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.652593 systemd[1]: Stopped systemd-sysctl.service. Jul 14 21:45:00.654162 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:45:00.654199 systemd[1]: Stopped systemd-modules-load.service. Jul 14 21:45:00.656764 systemd[1]: Stopping systemd-udevd.service... Jul 14 21:45:00.659114 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:45:00.663063 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:45:00.663172 systemd[1]: Stopped network-cleanup.service. Jul 14 21:45:00.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.664635 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 14 21:45:00.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.664745 systemd[1]: Stopped systemd-udevd.service. Jul 14 21:45:00.665684 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:45:00.665713 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 21:45:00.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.666609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:45:00.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.666639 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 21:45:00.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.667524 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:45:00.667561 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 21:45:00.668636 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:45:00.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.668670 systemd[1]: Stopped dracut-cmdline.service. Jul 14 21:45:00.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:00.669581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:45:00.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.669617 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 21:45:00.671398 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 21:45:00.672600 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:45:00.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.672654 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 14 21:45:00.674300 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:45:00.674339 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 21:45:00.674950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:45:00.674989 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 21:45:00.676703 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 21:45:00.677240 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:45:00.677328 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 21:45:00.752760 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:45:00.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 14 21:45:00.753526 systemd[1]: Stopped sysroot-boot.service. Jul 14 21:45:00.754224 systemd[1]: Reached target initrd-switch-root.target. Jul 14 21:45:00.755341 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:45:00.755388 systemd[1]: Stopped initrd-setup-root.service. Jul 14 21:45:00.757148 systemd[1]: Starting initrd-switch-root.service... Jul 14 21:45:00.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:00.763798 systemd[1]: Switching root. Jul 14 21:45:00.784128 systemd-journald[289]: Journal stopped Jul 14 21:45:02.844580 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Jul 14 21:45:02.844638 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 21:45:02.844651 kernel: SELinux: Class anon_inode not defined in policy. Jul 14 21:45:02.844665 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 21:45:02.844675 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:45:02.844685 kernel: SELinux: policy capability open_perms=1 Jul 14 21:45:02.844695 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:45:02.844705 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:45:02.844716 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:45:02.844730 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:45:02.844742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:45:02.844752 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:45:02.844762 systemd[1]: Successfully loaded SELinux policy in 39.173ms. Jul 14 21:45:02.844781 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.379ms. 
Jul 14 21:45:02.844793 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 21:45:02.844804 systemd[1]: Detected virtualization kvm. Jul 14 21:45:02.844814 systemd[1]: Detected architecture arm64. Jul 14 21:45:02.844841 systemd[1]: Detected first boot. Jul 14 21:45:02.844851 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:45:02.844862 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 21:45:02.844872 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:45:02.844883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:45:02.844895 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:45:02.844907 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:45:02.844919 kernel: kauditd_printk_skb: 81 callbacks suppressed Jul 14 21:45:02.844929 kernel: audit: type=1334 audit(1752529502.703:85): prog-id=12 op=LOAD Jul 14 21:45:02.844939 kernel: audit: type=1334 audit(1752529502.703:86): prog-id=3 op=UNLOAD Jul 14 21:45:02.844949 kernel: audit: type=1334 audit(1752529502.704:87): prog-id=13 op=LOAD Jul 14 21:45:02.844958 kernel: audit: type=1334 audit(1752529502.705:88): prog-id=14 op=LOAD Jul 14 21:45:02.844968 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jul 14 21:45:02.844978 kernel: audit: type=1334 audit(1752529502.705:89): prog-id=4 op=UNLOAD Jul 14 21:45:02.844990 systemd[1]: Stopped initrd-switch-root.service. Jul 14 21:45:02.845000 kernel: audit: type=1334 audit(1752529502.705:90): prog-id=5 op=UNLOAD Jul 14 21:45:02.845012 kernel: audit: type=1131 audit(1752529502.706:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.845022 kernel: audit: type=1130 audit(1752529502.712:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.845036 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:45:02.845046 kernel: audit: type=1131 audit(1752529502.712:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.845059 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 21:45:02.845071 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 21:45:02.845081 kernel: audit: type=1334 audit(1752529502.720:94): prog-id=12 op=UNLOAD Jul 14 21:45:02.845091 systemd[1]: Created slice system-getty.slice. Jul 14 21:45:02.845103 systemd[1]: Created slice system-modprobe.slice. Jul 14 21:45:02.845118 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 14 21:45:02.845129 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 21:45:02.845140 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 21:45:02.845151 systemd[1]: Created slice user.slice. Jul 14 21:45:02.845162 systemd[1]: Started systemd-ask-password-console.path. 
Jul 14 21:45:02.845172 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 21:45:02.845183 systemd[1]: Set up automount boot.automount. Jul 14 21:45:02.845194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 21:45:02.845206 systemd[1]: Stopped target initrd-switch-root.target. Jul 14 21:45:02.845217 systemd[1]: Stopped target initrd-fs.target. Jul 14 21:45:02.845229 systemd[1]: Stopped target initrd-root-fs.target. Jul 14 21:45:02.845247 systemd[1]: Reached target integritysetup.target. Jul 14 21:45:02.845259 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 21:45:02.845270 systemd[1]: Reached target remote-fs.target. Jul 14 21:45:02.845280 systemd[1]: Reached target slices.target. Jul 14 21:45:02.845291 systemd[1]: Reached target swap.target. Jul 14 21:45:02.845303 systemd[1]: Reached target torcx.target. Jul 14 21:45:02.845316 systemd[1]: Reached target veritysetup.target. Jul 14 21:45:02.845327 systemd[1]: Listening on systemd-coredump.socket. Jul 14 21:45:02.845338 systemd[1]: Listening on systemd-initctl.socket. Jul 14 21:45:02.845348 systemd[1]: Listening on systemd-networkd.socket. Jul 14 21:45:02.845359 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 21:45:02.845371 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 21:45:02.845381 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 21:45:02.845392 systemd[1]: Mounting dev-hugepages.mount... Jul 14 21:45:02.845402 systemd[1]: Mounting dev-mqueue.mount... Jul 14 21:45:02.845413 systemd[1]: Mounting media.mount... Jul 14 21:45:02.845423 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 21:45:02.845434 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 21:45:02.845444 systemd[1]: Mounting tmp.mount... Jul 14 21:45:02.845454 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 21:45:02.845468 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 14 21:45:02.845478 systemd[1]: Starting kmod-static-nodes.service... Jul 14 21:45:02.845489 systemd[1]: Starting modprobe@configfs.service... Jul 14 21:45:02.845499 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:45:02.845509 systemd[1]: Starting modprobe@drm.service... Jul 14 21:45:02.845519 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:45:02.845533 systemd[1]: Starting modprobe@fuse.service... Jul 14 21:45:02.845544 systemd[1]: Starting modprobe@loop.service... Jul 14 21:45:02.845555 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:45:02.845567 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:45:02.845578 systemd[1]: Stopped systemd-fsck-root.service. Jul 14 21:45:02.845589 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 21:45:02.845599 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:45:02.845610 systemd[1]: Stopped systemd-journald.service. Jul 14 21:45:02.845620 kernel: fuse: init (API version 7.34) Jul 14 21:45:02.845630 kernel: loop: module loaded Jul 14 21:45:02.845640 systemd[1]: Starting systemd-journald.service... Jul 14 21:45:02.845650 systemd[1]: Starting systemd-modules-load.service... Jul 14 21:45:02.845663 systemd[1]: Starting systemd-network-generator.service... Jul 14 21:45:02.845673 systemd[1]: Starting systemd-remount-fs.service... Jul 14 21:45:02.845683 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 21:45:02.845695 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:45:02.845705 systemd[1]: Stopped verity-setup.service. Jul 14 21:45:02.845716 systemd[1]: Mounted dev-hugepages.mount. Jul 14 21:45:02.845726 systemd[1]: Mounted dev-mqueue.mount. Jul 14 21:45:02.845737 systemd[1]: Mounted media.mount. Jul 14 21:45:02.845752 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 21:45:02.845763 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 14 21:45:02.845774 systemd[1]: Mounted tmp.mount. Jul 14 21:45:02.845784 systemd[1]: Finished kmod-static-nodes.service. Jul 14 21:45:02.845795 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:45:02.845806 systemd[1]: Finished modprobe@configfs.service. Jul 14 21:45:02.845824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:45:02.845836 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:45:02.845849 systemd-journald[989]: Journal started Jul 14 21:45:02.845895 systemd-journald[989]: Runtime Journal (/run/log/journal/6f5d0cdfd1b44ce18732436f03229263) is 6.0M, max 48.7M, 42.6M free. Jul 14 21:45:00.874000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:45:00.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:45:00.965000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:45:00.965000 audit: BPF prog-id=10 op=LOAD Jul 14 21:45:00.965000 audit: BPF prog-id=10 op=UNLOAD Jul 14 21:45:00.965000 audit: BPF prog-id=11 op=LOAD Jul 14 21:45:00.965000 audit: BPF prog-id=11 op=UNLOAD Jul 14 21:45:01.006000 audit[927]: AVC avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 14 21:45:01.006000 audit[927]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b4 a1=40000c8de0 a2=40000cf040 a3=32 items=0 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" 
exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:45:01.006000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 21:45:01.007000 audit[927]: AVC avc: denied { associate } for pid=927 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 14 21:45:01.007000 audit[927]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:45:01.007000 audit: CWD cwd="/" Jul 14 21:45:01.007000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:45:01.007000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:45:01.007000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 21:45:02.703000 audit: BPF prog-id=12 op=LOAD Jul 14 21:45:02.703000 audit: BPF prog-id=3 op=UNLOAD Jul 14 21:45:02.704000 audit: BPF prog-id=13 op=LOAD Jul 14 21:45:02.705000 audit: BPF prog-id=14 
op=LOAD Jul 14 21:45:02.705000 audit: BPF prog-id=4 op=UNLOAD Jul 14 21:45:02.705000 audit: BPF prog-id=5 op=UNLOAD Jul 14 21:45:02.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.720000 audit: BPF prog-id=12 op=UNLOAD Jul 14 21:45:02.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:02.810000 audit: BPF prog-id=15 op=LOAD Jul 14 21:45:02.812000 audit: BPF prog-id=16 op=LOAD Jul 14 21:45:02.812000 audit: BPF prog-id=17 op=LOAD Jul 14 21:45:02.812000 audit: BPF prog-id=13 op=UNLOAD Jul 14 21:45:02.812000 audit: BPF prog-id=14 op=UNLOAD Jul 14 21:45:02.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.843000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 21:45:02.843000 audit[989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff613dbc0 a2=4000 a3=1 items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:45:02.843000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 21:45:02.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:02.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:01.005185 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:45:02.702715 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:45:01.005457 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 21:45:02.702728 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 21:45:01.005475 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 21:45:02.706616 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 21:45:01.005506 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 14 21:45:02.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:01.005516 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 14 21:45:01.005544 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 14 21:45:01.005556 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 14 21:45:02.847832 systemd[1]: Started systemd-journald.service. Jul 14 21:45:01.005754 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 14 21:45:02.847853 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:45:01.005786 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 21:45:01.005798 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 21:45:02.848043 systemd[1]: Finished modprobe@drm.service. 
Jul 14 21:45:01.006765 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 14 21:45:01.006807 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 14 21:45:01.006839 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.101: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.101 Jul 14 21:45:01.006853 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 14 21:45:01.006872 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.101: no such file or directory" path=/var/lib/torcx/store/3510.3.101 Jul 14 21:45:01.006885 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 14 21:45:02.446180 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:02Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:45:02.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:02.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.446455 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:02Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:45:02.446549 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:02Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:45:02.446714 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:02Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:45:02.446764 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:02Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 14 21:45:02.446848 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-14T21:45:02Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 14 21:45:02.849248 systemd[1]: 
modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:45:02.849690 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:45:02.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.850720 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:45:02.850913 systemd[1]: Finished modprobe@fuse.service. Jul 14 21:45:02.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.851776 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:45:02.852180 systemd[1]: Finished modprobe@loop.service. Jul 14 21:45:02.853130 systemd[1]: Finished systemd-modules-load.service. 
Jul 14 21:45:02.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.854229 systemd[1]: Finished systemd-network-generator.service. Jul 14 21:45:02.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.855313 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 21:45:02.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.856249 systemd[1]: Finished systemd-remount-fs.service. Jul 14 21:45:02.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.857650 systemd[1]: Reached target network-pre.target. Jul 14 21:45:02.859625 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 21:45:02.861391 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 21:45:02.862206 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:45:02.868726 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 21:45:02.870526 systemd[1]: Starting systemd-journal-flush.service... Jul 14 21:45:02.871321 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:45:02.872333 systemd[1]: Starting systemd-random-seed.service... 
Jul 14 21:45:02.873092 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:45:02.874155 systemd[1]: Starting systemd-sysctl.service... Jul 14 21:45:02.880362 systemd-journald[989]: Time spent on flushing to /var/log/journal/6f5d0cdfd1b44ce18732436f03229263 is 16.579ms for 975 entries. Jul 14 21:45:02.880362 systemd-journald[989]: System Journal (/var/log/journal/6f5d0cdfd1b44ce18732436f03229263) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:45:02.908376 systemd-journald[989]: Received client request to flush runtime journal. Jul 14 21:45:02.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.876163 systemd[1]: Starting systemd-sysusers.service... Jul 14 21:45:02.878947 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 21:45:02.882009 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 21:45:02.909456 udevadm[1028]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 21:45:02.883414 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 14 21:45:02.885353 systemd[1]: Starting systemd-udev-settle.service... Jul 14 21:45:02.890427 systemd[1]: Finished systemd-random-seed.service. Jul 14 21:45:02.893135 systemd[1]: Reached target first-boot-complete.target. Jul 14 21:45:02.904009 systemd[1]: Finished systemd-sysctl.service. Jul 14 21:45:02.905025 systemd[1]: Finished systemd-sysusers.service. Jul 14 21:45:02.907074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 21:45:02.910670 systemd[1]: Finished systemd-journal-flush.service. Jul 14 21:45:02.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:02.924147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 21:45:02.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.280380 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 21:45:03.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.281000 audit: BPF prog-id=18 op=LOAD Jul 14 21:45:03.284000 audit: BPF prog-id=19 op=LOAD Jul 14 21:45:03.284000 audit: BPF prog-id=7 op=UNLOAD Jul 14 21:45:03.284000 audit: BPF prog-id=8 op=UNLOAD Jul 14 21:45:03.286773 systemd[1]: Starting systemd-udevd.service... Jul 14 21:45:03.322487 systemd-udevd[1033]: Using default interface naming scheme 'v252'. Jul 14 21:45:03.344132 systemd[1]: Started systemd-udevd.service. 
Jul 14 21:45:03.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.344000 audit: BPF prog-id=20 op=LOAD Jul 14 21:45:03.347072 systemd[1]: Starting systemd-networkd.service... Jul 14 21:45:03.351000 audit: BPF prog-id=21 op=LOAD Jul 14 21:45:03.351000 audit: BPF prog-id=22 op=LOAD Jul 14 21:45:03.351000 audit: BPF prog-id=23 op=LOAD Jul 14 21:45:03.353270 systemd[1]: Starting systemd-userdbd.service... Jul 14 21:45:03.368103 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 14 21:45:03.383300 systemd[1]: Started systemd-userdbd.service. Jul 14 21:45:03.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.431339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 21:45:03.463273 systemd[1]: Finished systemd-udev-settle.service. Jul 14 21:45:03.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.465612 systemd[1]: Starting lvm2-activation-early.service... Jul 14 21:45:03.476679 systemd-networkd[1040]: lo: Link UP Jul 14 21:45:03.477043 systemd-networkd[1040]: lo: Gained carrier Jul 14 21:45:03.477517 systemd-networkd[1040]: Enumeration completed Jul 14 21:45:03.477720 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:45:03.477740 systemd[1]: Started systemd-networkd.service. 
Jul 14 21:45:03.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.485430 systemd-networkd[1040]: eth0: Link UP Jul 14 21:45:03.485548 systemd-networkd[1040]: eth0: Gained carrier Jul 14 21:45:03.508652 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:45:03.522970 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:45:03.530807 systemd[1]: Finished lvm2-activation-early.service. Jul 14 21:45:03.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.531640 systemd[1]: Reached target cryptsetup.target. Jul 14 21:45:03.533555 systemd[1]: Starting lvm2-activation.service... Jul 14 21:45:03.537453 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:45:03.560790 systemd[1]: Finished lvm2-activation.service. Jul 14 21:45:03.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.561625 systemd[1]: Reached target local-fs-pre.target. Jul 14 21:45:03.562353 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:45:03.562384 systemd[1]: Reached target local-fs.target. Jul 14 21:45:03.563019 systemd[1]: Reached target machines.target. Jul 14 21:45:03.565216 systemd[1]: Starting ldconfig.service... 
Jul 14 21:45:03.566277 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:45:03.566375 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:03.567856 systemd[1]: Starting systemd-boot-update.service... Jul 14 21:45:03.569967 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 21:45:03.572612 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 21:45:03.574741 systemd[1]: Starting systemd-sysext.service... Jul 14 21:45:03.575654 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Jul 14 21:45:03.578572 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 21:45:03.586231 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 14 21:45:03.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.591456 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 21:45:03.596927 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 21:45:03.597127 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 21:45:03.657601 systemd[1]: Finished systemd-machine-id-commit.service. Jul 14 21:45:03.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:03.659841 kernel: loop0: detected capacity change from 0 to 207008 Jul 14 21:45:03.672847 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:45:03.674046 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) Jul 14 21:45:03.674046 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters Jul 14 21:45:03.677431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 21:45:03.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.696876 kernel: loop1: detected capacity change from 0 to 207008 Jul 14 21:45:03.701884 (sd-sysext)[1082]: Using extensions 'kubernetes'. Jul 14 21:45:03.702539 (sd-sysext)[1082]: Merged extensions into '/usr'. Jul 14 21:45:03.731201 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:45:03.734508 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:45:03.737922 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:45:03.741297 systemd[1]: Starting modprobe@loop.service... Jul 14 21:45:03.742592 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:45:03.742948 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:03.745002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:45:03.745375 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:45:03.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:03.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.746956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:45:03.747106 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:45:03.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.748365 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:45:03.748489 systemd[1]: Finished modprobe@loop.service. Jul 14 21:45:03.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.749842 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:45:03.749965 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:45:03.769236 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 21:45:03.773279 systemd[1]: Finished ldconfig.service. 
Jul 14 21:45:03.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.831587 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:45:03.833402 systemd[1]: Mounting boot.mount... Jul 14 21:45:03.835464 systemd[1]: Mounting usr-share-oem.mount... Jul 14 21:45:03.841917 systemd[1]: Mounted boot.mount. Jul 14 21:45:03.842687 systemd[1]: Mounted usr-share-oem.mount. Jul 14 21:45:03.844598 systemd[1]: Finished systemd-sysext.service. Jul 14 21:45:03.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.846962 systemd[1]: Starting ensure-sysext.service... Jul 14 21:45:03.848744 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 14 21:45:03.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:03.850002 systemd[1]: Finished systemd-boot-update.service. Jul 14 21:45:03.854046 systemd[1]: Reloading. Jul 14 21:45:03.858591 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 21:45:03.859299 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 21:45:03.860619 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 14 21:45:03.894281 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-07-14T21:45:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:45:03.894616 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-07-14T21:45:03Z" level=info msg="torcx already run" Jul 14 21:45:03.956304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:45:03.956323 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:45:03.972389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 21:45:04.015000 audit: BPF prog-id=24 op=LOAD Jul 14 21:45:04.015000 audit: BPF prog-id=20 op=UNLOAD Jul 14 21:45:04.016000 audit: BPF prog-id=25 op=LOAD Jul 14 21:45:04.016000 audit: BPF prog-id=15 op=UNLOAD Jul 14 21:45:04.016000 audit: BPF prog-id=26 op=LOAD Jul 14 21:45:04.016000 audit: BPF prog-id=27 op=LOAD Jul 14 21:45:04.016000 audit: BPF prog-id=16 op=UNLOAD Jul 14 21:45:04.016000 audit: BPF prog-id=17 op=UNLOAD Jul 14 21:45:04.017000 audit: BPF prog-id=28 op=LOAD Jul 14 21:45:04.018000 audit: BPF prog-id=29 op=LOAD Jul 14 21:45:04.018000 audit: BPF prog-id=18 op=UNLOAD Jul 14 21:45:04.018000 audit: BPF prog-id=19 op=UNLOAD Jul 14 21:45:04.018000 audit: BPF prog-id=30 op=LOAD Jul 14 21:45:04.018000 audit: BPF prog-id=21 op=UNLOAD Jul 14 21:45:04.018000 audit: BPF prog-id=31 op=LOAD Jul 14 21:45:04.018000 audit: BPF prog-id=32 op=LOAD Jul 14 21:45:04.018000 audit: BPF prog-id=22 op=UNLOAD Jul 14 21:45:04.018000 audit: BPF prog-id=23 op=UNLOAD Jul 14 21:45:04.021649 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 21:45:04.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.026426 systemd[1]: Starting audit-rules.service... Jul 14 21:45:04.028483 systemd[1]: Starting clean-ca-certificates.service... Jul 14 21:45:04.031062 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 21:45:04.035000 audit: BPF prog-id=33 op=LOAD Jul 14 21:45:04.044460 systemd[1]: Starting systemd-resolved.service... Jul 14 21:45:04.047000 audit: BPF prog-id=34 op=LOAD Jul 14 21:45:04.050072 systemd[1]: Starting systemd-timesyncd.service... Jul 14 21:45:04.052751 systemd[1]: Starting systemd-update-utmp.service... Jul 14 21:45:04.062106 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 14 21:45:04.064912 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:45:04.066917 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:45:04.070682 systemd[1]: Starting modprobe@loop.service... Jul 14 21:45:04.071000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.071444 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.071659 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:04.072876 systemd[1]: Finished clean-ca-certificates.service. Jul 14 21:45:04.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.074268 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 21:45:04.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.075545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:45:04.075678 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:45:04.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:45:04.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.077031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:45:04.077161 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:45:04.078336 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:45:04.078456 systemd[1]: Finished modprobe@loop.service. Jul 14 21:45:04.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.081799 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:45:04.081990 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.084099 systemd[1]: Starting systemd-update-done.service... 
Jul 14 21:45:04.084881 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:45:04.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.087886 systemd[1]: Finished systemd-update-utmp.service. Jul 14 21:45:04.092120 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.093793 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:45:04.095994 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:45:04.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.097922 systemd[1]: Starting modprobe@loop.service... Jul 14 21:45:04.098562 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.098730 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:04.098866 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:45:04.099788 systemd[1]: Finished systemd-update-done.service. Jul 14 21:45:04.101117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:45:04.101256 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 14 21:45:04.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.102316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:45:04.102441 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:45:04.103699 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:45:04.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:45:04.103985 systemd[1]: Finished modprobe@loop.service. Jul 14 21:45:04.104997 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 14 21:45:04.105085 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.113025 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.113000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 14 21:45:04.113000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffbabb430 a2=420 a3=0 items=0 ppid=1149 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:45:04.113000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 14 21:45:04.114334 augenrules[1177]: No rules Jul 14 21:45:04.115068 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:45:04.117321 systemd[1]: Starting modprobe@drm.service... Jul 14 21:45:04.119015 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:45:04.121496 systemd[1]: Starting modprobe@loop.service... Jul 14 21:45:04.122341 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.122536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:04.124604 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 14 21:45:04.125587 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:45:04.127400 systemd[1]: Finished audit-rules.service. Jul 14 21:45:04.128524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 14 21:45:04.128655 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:45:04.129957 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:45:04.130091 systemd[1]: Finished modprobe@drm.service. Jul 14 21:45:04.131351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:45:04.131473 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:45:04.132494 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:45:04.132610 systemd[1]: Finished modprobe@loop.service. Jul 14 21:45:04.134027 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:45:04.134099 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.135120 systemd[1]: Finished ensure-sysext.service. Jul 14 21:45:04.137141 systemd[1]: Started systemd-timesyncd.service. Jul 14 21:45:04.138152 systemd-timesyncd[1159]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 21:45:04.138268 systemd[1]: Reached target time-set.target. Jul 14 21:45:04.138507 systemd-timesyncd[1159]: Initial clock synchronization to Mon 2025-07-14 21:45:04.493449 UTC. Jul 14 21:45:04.139935 systemd-resolved[1153]: Positive Trust Anchors: Jul 14 21:45:04.139948 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:45:04.139975 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 21:45:04.162110 systemd-resolved[1153]: Defaulting to hostname 'linux'. 
Jul 14 21:45:04.163682 systemd[1]: Started systemd-resolved.service. Jul 14 21:45:04.164465 systemd[1]: Reached target network.target. Jul 14 21:45:04.165049 systemd[1]: Reached target nss-lookup.target. Jul 14 21:45:04.165631 systemd[1]: Reached target sysinit.target. Jul 14 21:45:04.166328 systemd[1]: Started motdgen.path. Jul 14 21:45:04.166863 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 14 21:45:04.167946 systemd[1]: Started logrotate.timer. Jul 14 21:45:04.168640 systemd[1]: Started mdadm.timer. Jul 14 21:45:04.169178 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 14 21:45:04.169795 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:45:04.169853 systemd[1]: Reached target paths.target. Jul 14 21:45:04.170391 systemd[1]: Reached target timers.target. Jul 14 21:45:04.171298 systemd[1]: Listening on dbus.socket. Jul 14 21:45:04.173066 systemd[1]: Starting docker.socket... Jul 14 21:45:04.176651 systemd[1]: Listening on sshd.socket. Jul 14 21:45:04.177565 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:04.178544 systemd[1]: Listening on docker.socket. Jul 14 21:45:04.179276 systemd[1]: Reached target sockets.target. Jul 14 21:45:04.179861 systemd[1]: Reached target basic.target. Jul 14 21:45:04.180463 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.180506 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 21:45:04.181849 systemd[1]: Starting containerd.service... Jul 14 21:45:04.183899 systemd[1]: Starting dbus.service... Jul 14 21:45:04.185996 systemd[1]: Starting enable-oem-cloudinit.service... 
Jul 14 21:45:04.188582 systemd[1]: Starting extend-filesystems.service... Jul 14 21:45:04.189441 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 14 21:45:04.191446 systemd[1]: Starting motdgen.service... Jul 14 21:45:04.193897 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 14 21:45:04.196405 systemd[1]: Starting sshd-keygen.service... Jul 14 21:45:04.202290 systemd[1]: Starting systemd-logind.service... Jul 14 21:45:04.206057 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:45:04.206303 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:45:04.207143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 21:45:04.208348 systemd[1]: Starting update-engine.service... Jul 14 21:45:04.209241 jq[1192]: false Jul 14 21:45:04.210499 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 14 21:45:04.214592 jq[1205]: true Jul 14 21:45:04.214690 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:45:04.214954 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 14 21:45:04.231734 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 21:45:04.231988 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 14 21:45:04.236911 jq[1208]: true Jul 14 21:45:04.276311 extend-filesystems[1193]: Found loop1 Jul 14 21:45:04.277247 extend-filesystems[1193]: Found vda Jul 14 21:45:04.278136 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:45:04.278406 systemd[1]: Finished motdgen.service. 
Jul 14 21:45:04.278934 extend-filesystems[1193]: Found vda1 Jul 14 21:45:04.281526 extend-filesystems[1193]: Found vda2 Jul 14 21:45:04.282140 extend-filesystems[1193]: Found vda3 Jul 14 21:45:04.282704 extend-filesystems[1193]: Found usr Jul 14 21:45:04.283320 extend-filesystems[1193]: Found vda4 Jul 14 21:45:04.283895 extend-filesystems[1193]: Found vda6 Jul 14 21:45:04.284456 extend-filesystems[1193]: Found vda7 Jul 14 21:45:04.285016 extend-filesystems[1193]: Found vda9 Jul 14 21:45:04.285565 extend-filesystems[1193]: Checking size of /dev/vda9 Jul 14 21:45:04.321449 bash[1229]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:45:04.321048 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 14 21:45:04.322668 dbus-daemon[1191]: [system] SELinux support is enabled Jul 14 21:45:04.331068 systemd[1]: Started dbus.service. Jul 14 21:45:04.337164 extend-filesystems[1193]: Resized partition /dev/vda9 Jul 14 21:45:04.334675 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:45:04.334706 systemd[1]: Reached target system-config.target. Jul 14 21:45:04.335740 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 21:45:04.335758 systemd[1]: Reached target user-config.target. Jul 14 21:45:04.343161 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Jul 14 21:45:04.347731 systemd-logind[1197]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:45:04.348026 systemd-logind[1197]: New seat seat0. Jul 14 21:45:04.358890 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:45:04.360619 systemd[1]: Started systemd-logind.service. 
Jul 14 21:45:04.374844 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:45:04.391019 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:45:04.391019 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:45:04.391019 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:45:04.394363 extend-filesystems[1193]: Resized filesystem in /dev/vda9 Jul 14 21:45:04.395081 update_engine[1200]: I0714 21:45:04.390916 1200 main.cc:92] Flatcar Update Engine starting Jul 14 21:45:04.391878 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:45:04.392059 systemd[1]: Finished extend-filesystems.service. Jul 14 21:45:04.398795 systemd[1]: Started update-engine.service. Jul 14 21:45:04.401423 systemd[1]: Started locksmithd.service. Jul 14 21:45:04.402645 update_engine[1200]: I0714 21:45:04.402602 1200 update_check_scheduler.cc:74] Next update check in 2m48s Jul 14 21:45:04.424254 env[1210]: time="2025-07-14T21:45:04.423937680Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 14 21:45:04.445144 env[1210]: time="2025-07-14T21:45:04.445099320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:45:04.445443 env[1210]: time="2025-07-14T21:45:04.445421800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:45:04.445557 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:45:04.446870 env[1210]: time="2025-07-14T21:45:04.446832240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:45:04.446954 env[1210]: time="2025-07-14T21:45:04.446939120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:45:04.447251 env[1210]: time="2025-07-14T21:45:04.447213280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:45:04.447339 env[1210]: time="2025-07-14T21:45:04.447322560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:45:04.447397 env[1210]: time="2025-07-14T21:45:04.447383080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 14 21:45:04.447447 env[1210]: time="2025-07-14T21:45:04.447434600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:45:04.447576 env[1210]: time="2025-07-14T21:45:04.447557400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:45:04.447943 env[1210]: time="2025-07-14T21:45:04.447920520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:45:04.448170 env[1210]: time="2025-07-14T21:45:04.448148760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:45:04.448256 env[1210]: time="2025-07-14T21:45:04.448227960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 21:45:04.448369 env[1210]: time="2025-07-14T21:45:04.448350480Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 14 21:45:04.448443 env[1210]: time="2025-07-14T21:45:04.448427960Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:45:04.451594 env[1210]: time="2025-07-14T21:45:04.451570640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:45:04.451692 env[1210]: time="2025-07-14T21:45:04.451676320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:45:04.451776 env[1210]: time="2025-07-14T21:45:04.451760040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:45:04.451877 env[1210]: time="2025-07-14T21:45:04.451859080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.451981 env[1210]: time="2025-07-14T21:45:04.451951840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.452044 env[1210]: time="2025-07-14T21:45:04.452031160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.452102 env[1210]: time="2025-07-14T21:45:04.452089200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 14 21:45:04.452507 env[1210]: time="2025-07-14T21:45:04.452475200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.452597 env[1210]: time="2025-07-14T21:45:04.452581600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.452662 env[1210]: time="2025-07-14T21:45:04.452648280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.452718 env[1210]: time="2025-07-14T21:45:04.452705560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.452774 env[1210]: time="2025-07-14T21:45:04.452761120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 21:45:04.452938 env[1210]: time="2025-07-14T21:45:04.452917680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:45:04.453083 env[1210]: time="2025-07-14T21:45:04.453064760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:45:04.453391 env[1210]: time="2025-07-14T21:45:04.453368000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:45:04.453479 env[1210]: time="2025-07-14T21:45:04.453463360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.453551 env[1210]: time="2025-07-14T21:45:04.453537320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:45:04.453716 env[1210]: time="2025-07-14T21:45:04.453700720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 14 21:45:04.453797 env[1210]: time="2025-07-14T21:45:04.453782800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.453882 env[1210]: time="2025-07-14T21:45:04.453867880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.453941 env[1210]: time="2025-07-14T21:45:04.453928560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454018 env[1210]: time="2025-07-14T21:45:04.454002600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454079 env[1210]: time="2025-07-14T21:45:04.454066160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454137 env[1210]: time="2025-07-14T21:45:04.454122760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454193 env[1210]: time="2025-07-14T21:45:04.454179320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454274 env[1210]: time="2025-07-14T21:45:04.454259280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:45:04.454450 env[1210]: time="2025-07-14T21:45:04.454430000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454520 env[1210]: time="2025-07-14T21:45:04.454506160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.454585 env[1210]: time="2025-07-14T21:45:04.454571360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 14 21:45:04.454651 env[1210]: time="2025-07-14T21:45:04.454637520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:45:04.454728 env[1210]: time="2025-07-14T21:45:04.454710160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 14 21:45:04.454782 env[1210]: time="2025-07-14T21:45:04.454769320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:45:04.454876 env[1210]: time="2025-07-14T21:45:04.454860680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 14 21:45:04.454959 env[1210]: time="2025-07-14T21:45:04.454945440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 21:45:04.455247 env[1210]: time="2025-07-14T21:45:04.455182880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:45:04.455941 env[1210]: time="2025-07-14T21:45:04.455574920Z" level=info msg="Connect containerd service" Jul 14 21:45:04.455941 env[1210]: time="2025-07-14T21:45:04.455617680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:45:04.456466 env[1210]: time="2025-07-14T21:45:04.456437200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:45:04.456728 env[1210]: time="2025-07-14T21:45:04.456699720Z" level=info msg="Start subscribing containerd event" Jul 14 21:45:04.456829 env[1210]: time="2025-07-14T21:45:04.456801240Z" level=info msg="Start recovering state" Jul 14 21:45:04.456936 env[1210]: 
time="2025-07-14T21:45:04.456921480Z" level=info msg="Start event monitor" Jul 14 21:45:04.457009 env[1210]: time="2025-07-14T21:45:04.456995520Z" level=info msg="Start snapshots syncer" Jul 14 21:45:04.457082 env[1210]: time="2025-07-14T21:45:04.457068200Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:45:04.457137 env[1210]: time="2025-07-14T21:45:04.457123600Z" level=info msg="Start streaming server" Jul 14 21:45:04.457504 env[1210]: time="2025-07-14T21:45:04.457469480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:45:04.457658 env[1210]: time="2025-07-14T21:45:04.457642400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:45:04.457781 env[1210]: time="2025-07-14T21:45:04.457768320Z" level=info msg="containerd successfully booted in 0.035186s" Jul 14 21:45:04.457858 systemd[1]: Started containerd.service. Jul 14 21:45:05.200100 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:45:05.219390 systemd[1]: Finished sshd-keygen.service. Jul 14 21:45:05.221696 systemd[1]: Starting issuegen.service... Jul 14 21:45:05.227026 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 21:45:05.227188 systemd[1]: Finished issuegen.service. Jul 14 21:45:05.229436 systemd[1]: Starting systemd-user-sessions.service... Jul 14 21:45:05.236097 systemd[1]: Finished systemd-user-sessions.service. Jul 14 21:45:05.238364 systemd[1]: Started getty@tty1.service. Jul 14 21:45:05.240484 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 14 21:45:05.241424 systemd[1]: Reached target getty.target. Jul 14 21:45:05.292112 systemd-networkd[1040]: eth0: Gained IPv6LL Jul 14 21:45:05.293819 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 14 21:45:05.294897 systemd[1]: Reached target network-online.target. Jul 14 21:45:05.297517 systemd[1]: Starting kubelet.service... Jul 14 21:45:05.902203 systemd[1]: Started kubelet.service. 
Jul 14 21:45:05.903288 systemd[1]: Reached target multi-user.target. Jul 14 21:45:05.905757 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 14 21:45:05.912745 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 14 21:45:05.912925 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 14 21:45:05.914107 systemd[1]: Startup finished in 601ms (kernel) + 4.241s (initrd) + 5.093s (userspace) = 9.936s. Jul 14 21:45:06.358007 kubelet[1267]: E0714 21:45:06.357906 1267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:45:06.360019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:45:06.360149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:45:09.057215 systemd[1]: Created slice system-sshd.slice. Jul 14 21:45:09.058335 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:51602.service. Jul 14 21:45:09.104161 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 51602 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:09.106670 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:09.117803 systemd-logind[1197]: New session 1 of user core. Jul 14 21:45:09.118792 systemd[1]: Created slice user-500.slice. Jul 14 21:45:09.119977 systemd[1]: Starting user-runtime-dir@500.service... Jul 14 21:45:09.128919 systemd[1]: Finished user-runtime-dir@500.service. Jul 14 21:45:09.130433 systemd[1]: Starting user@500.service... 
Jul 14 21:45:09.133336 (systemd)[1280]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:09.195410 systemd[1280]: Queued start job for default target default.target. Jul 14 21:45:09.195948 systemd[1280]: Reached target paths.target. Jul 14 21:45:09.195980 systemd[1280]: Reached target sockets.target. Jul 14 21:45:09.195992 systemd[1280]: Reached target timers.target. Jul 14 21:45:09.196002 systemd[1280]: Reached target basic.target. Jul 14 21:45:09.196042 systemd[1280]: Reached target default.target. Jul 14 21:45:09.196075 systemd[1280]: Startup finished in 56ms. Jul 14 21:45:09.196137 systemd[1]: Started user@500.service. Jul 14 21:45:09.197470 systemd[1]: Started session-1.scope. Jul 14 21:45:09.250620 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:51618.service. Jul 14 21:45:09.297025 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:09.298986 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:09.302701 systemd-logind[1197]: New session 2 of user core. Jul 14 21:45:09.303921 systemd[1]: Started session-2.scope. Jul 14 21:45:09.361372 sshd[1289]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:09.365045 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:51628.service. Jul 14 21:45:09.365576 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:51618.service: Deactivated successfully. Jul 14 21:45:09.366428 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:45:09.367006 systemd-logind[1197]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:45:09.368108 systemd-logind[1197]: Removed session 2. 
Jul 14 21:45:09.408001 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 51628 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:09.409393 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:09.413082 systemd-logind[1197]: New session 3 of user core. Jul 14 21:45:09.414484 systemd[1]: Started session-3.scope. Jul 14 21:45:09.465848 sshd[1294]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:09.469428 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:51628.service: Deactivated successfully. Jul 14 21:45:09.470139 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:45:09.470768 systemd-logind[1197]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:45:09.472536 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:51644.service. Jul 14 21:45:09.473385 systemd-logind[1197]: Removed session 3. Jul 14 21:45:09.513845 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 51644 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:09.515134 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:09.518593 systemd-logind[1197]: New session 4 of user core. Jul 14 21:45:09.519422 systemd[1]: Started session-4.scope. Jul 14 21:45:09.573527 sshd[1301]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:09.576614 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:51644.service: Deactivated successfully. Jul 14 21:45:09.577366 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:45:09.577903 systemd-logind[1197]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:45:09.579077 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:51660.service. Jul 14 21:45:09.579740 systemd-logind[1197]: Removed session 4. 
Jul 14 21:45:09.619526 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 51660 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:45:09.621077 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:45:09.624937 systemd-logind[1197]: New session 5 of user core. Jul 14 21:45:09.625493 systemd[1]: Started session-5.scope. Jul 14 21:45:09.688058 sudo[1310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:45:09.688291 sudo[1310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 14 21:45:09.701289 systemd[1]: Starting coreos-metadata.service... Jul 14 21:45:09.708057 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:45:09.708248 systemd[1]: Finished coreos-metadata.service. Jul 14 21:45:10.239816 systemd[1]: Stopped kubelet.service. Jul 14 21:45:10.241901 systemd[1]: Starting kubelet.service... Jul 14 21:45:10.265367 systemd[1]: Reloading. Jul 14 21:45:10.323890 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-07-14T21:45:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:45:10.323918 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-07-14T21:45:10Z" level=info msg="torcx already run" Jul 14 21:45:10.474107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:45:10.474126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 14 21:45:10.490205 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:45:10.572281 systemd[1]: Started kubelet.service. Jul 14 21:45:10.574146 systemd[1]: Stopping kubelet.service... Jul 14 21:45:10.574397 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:45:10.574589 systemd[1]: Stopped kubelet.service. Jul 14 21:45:10.576241 systemd[1]: Starting kubelet.service... Jul 14 21:45:10.675109 systemd[1]: Started kubelet.service. Jul 14 21:45:10.712523 kubelet[1414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:45:10.712523 kubelet[1414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:45:10.712523 kubelet[1414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:45:10.712895 kubelet[1414]: I0714 21:45:10.712574 1414 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:45:11.328300 kubelet[1414]: I0714 21:45:11.328251 1414 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:45:11.328300 kubelet[1414]: I0714 21:45:11.328287 1414 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:45:11.328592 kubelet[1414]: I0714 21:45:11.328565 1414 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:45:11.378580 kubelet[1414]: I0714 21:45:11.378536 1414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:45:11.386450 kubelet[1414]: E0714 21:45:11.386413 1414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:45:11.386581 kubelet[1414]: I0714 21:45:11.386567 1414 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:45:11.389264 kubelet[1414]: I0714 21:45:11.389239 1414 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:45:11.390137 kubelet[1414]: I0714 21:45:11.390095 1414 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:45:11.390416 kubelet[1414]: I0714 21:45:11.390228 1414 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.15","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:45:11.390600 kubelet[1414]: I0714 21:45:11.390585 1414 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 14 21:45:11.390655 kubelet[1414]: I0714 21:45:11.390646 1414 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:45:11.390942 kubelet[1414]: I0714 21:45:11.390926 1414 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:45:11.393633 kubelet[1414]: I0714 21:45:11.393612 1414 kubelet.go:446] "Attempting to sync node with API server" Jul 14 21:45:11.393732 kubelet[1414]: I0714 21:45:11.393720 1414 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:45:11.393798 kubelet[1414]: I0714 21:45:11.393788 1414 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:45:11.393888 kubelet[1414]: I0714 21:45:11.393877 1414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:45:11.395633 kubelet[1414]: E0714 21:45:11.395588 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:11.401124 kubelet[1414]: E0714 21:45:11.401091 1414 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:11.401986 kubelet[1414]: W0714 21:45:11.401942 1414 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.15" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 14 21:45:11.401986 kubelet[1414]: E0714 21:45:11.401981 1414 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.15\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 14 21:45:11.403448 kubelet[1414]: I0714 21:45:11.403404 1414 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 21:45:11.404072 kubelet[1414]: I0714 21:45:11.404057 1414 
kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:45:11.404183 kubelet[1414]: W0714 21:45:11.404171 1414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 21:45:11.405018 kubelet[1414]: I0714 21:45:11.404998 1414 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:45:11.405078 kubelet[1414]: I0714 21:45:11.405039 1414 server.go:1287] "Started kubelet" Jul 14 21:45:11.408102 kubelet[1414]: I0714 21:45:11.408016 1414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:45:11.408376 kubelet[1414]: I0714 21:45:11.408354 1414 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:45:11.408441 kubelet[1414]: I0714 21:45:11.408421 1414 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:45:11.409451 kubelet[1414]: I0714 21:45:11.409425 1414 server.go:479] "Adding debug handlers to kubelet server" Jul 14 21:45:11.413143 kubelet[1414]: E0714 21:45:11.413122 1414 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:45:11.421766 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 14 21:45:11.421915 kubelet[1414]: I0714 21:45:11.421885 1414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:45:11.422082 kubelet[1414]: I0714 21:45:11.422061 1414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:45:11.422934 kubelet[1414]: E0714 21:45:11.422818 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:11.422934 kubelet[1414]: I0714 21:45:11.422862 1414 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:45:11.423133 kubelet[1414]: I0714 21:45:11.423097 1414 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:45:11.423241 kubelet[1414]: I0714 21:45:11.423218 1414 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:45:11.423633 kubelet[1414]: I0714 21:45:11.423593 1414 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:45:11.423775 kubelet[1414]: I0714 21:45:11.423742 1414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:45:11.424864 kubelet[1414]: I0714 21:45:11.424842 1414 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:45:11.430064 kubelet[1414]: E0714 21:45:11.430029 1414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.15\" not found" node="10.0.0.15" Jul 14 21:45:11.431810 kubelet[1414]: I0714 21:45:11.431790 1414 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:45:11.431810 kubelet[1414]: I0714 21:45:11.431804 1414 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:45:11.431910 kubelet[1414]: I0714 21:45:11.431825 1414 state_mem.go:36] "Initialized new in-memory state 
store" Jul 14 21:45:11.523025 kubelet[1414]: E0714 21:45:11.522944 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:11.523658 kubelet[1414]: I0714 21:45:11.523640 1414 policy_none.go:49] "None policy: Start" Jul 14 21:45:11.523716 kubelet[1414]: I0714 21:45:11.523664 1414 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:45:11.523716 kubelet[1414]: I0714 21:45:11.523686 1414 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:45:11.528457 systemd[1]: Created slice kubepods.slice. Jul 14 21:45:11.532747 systemd[1]: Created slice kubepods-burstable.slice. Jul 14 21:45:11.535094 systemd[1]: Created slice kubepods-besteffort.slice. Jul 14 21:45:11.545989 kubelet[1414]: I0714 21:45:11.545958 1414 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:45:11.546157 kubelet[1414]: I0714 21:45:11.546139 1414 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:45:11.546194 kubelet[1414]: I0714 21:45:11.546153 1414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:45:11.546665 kubelet[1414]: I0714 21:45:11.546430 1414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:45:11.548058 kubelet[1414]: E0714 21:45:11.548033 1414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 21:45:11.548126 kubelet[1414]: E0714 21:45:11.548077 1414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.15\" not found" Jul 14 21:45:11.610140 kubelet[1414]: I0714 21:45:11.610032 1414 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 14 21:45:11.611438 kubelet[1414]: I0714 21:45:11.611415 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:45:11.611564 kubelet[1414]: I0714 21:45:11.611550 1414 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 21:45:11.611637 kubelet[1414]: I0714 21:45:11.611626 1414 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 21:45:11.611696 kubelet[1414]: I0714 21:45:11.611678 1414 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:45:11.611788 kubelet[1414]: E0714 21:45:11.611775 1414 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 14 21:45:11.648045 kubelet[1414]: I0714 21:45:11.648009 1414 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.15" Jul 14 21:45:11.655366 kubelet[1414]: I0714 21:45:11.655326 1414 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.15" Jul 14 21:45:11.655366 kubelet[1414]: E0714 21:45:11.655368 1414 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.15\": node \"10.0.0.15\" not found" Jul 14 21:45:11.672365 kubelet[1414]: E0714 21:45:11.672317 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:11.706946 sudo[1310]: pam_unix(sudo:session): session closed for user root Jul 14 21:45:11.709738 sshd[1307]: pam_unix(sshd:session): session closed for user core Jul 14 21:45:11.712067 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:51660.service: Deactivated successfully. Jul 14 21:45:11.712805 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:45:11.713408 systemd-logind[1197]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:45:11.714201 systemd-logind[1197]: Removed session 5. 
Jul 14 21:45:11.773477 kubelet[1414]: E0714 21:45:11.773446 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:11.874264 kubelet[1414]: E0714 21:45:11.874134 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:11.974872 kubelet[1414]: E0714 21:45:11.974831 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.075719 kubelet[1414]: E0714 21:45:12.075690 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.176579 kubelet[1414]: E0714 21:45:12.176491 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.277585 kubelet[1414]: E0714 21:45:12.277554 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.330682 kubelet[1414]: I0714 21:45:12.330660 1414 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 14 21:45:12.330976 kubelet[1414]: W0714 21:45:12.330942 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 14 21:45:12.330976 kubelet[1414]: W0714 21:45:12.330941 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 14 21:45:12.331048 kubelet[1414]: W0714 21:45:12.330993 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 14 21:45:12.378514 kubelet[1414]: E0714 21:45:12.378492 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.395902 kubelet[1414]: E0714 21:45:12.395877 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:12.478952 kubelet[1414]: E0714 21:45:12.478868 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.579149 kubelet[1414]: E0714 21:45:12.579120 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.15\" not found" Jul 14 21:45:12.680469 kubelet[1414]: I0714 21:45:12.680439 1414 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 14 21:45:12.680973 env[1210]: time="2025-07-14T21:45:12.680904461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 21:45:12.681230 kubelet[1414]: I0714 21:45:12.681108 1414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 14 21:45:13.395183 kubelet[1414]: I0714 21:45:13.395157 1414 apiserver.go:52] "Watching apiserver" Jul 14 21:45:13.396259 kubelet[1414]: E0714 21:45:13.396239 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:13.405499 systemd[1]: Created slice kubepods-besteffort-pod9e9e6f88_c8fc_434f_ba69_c189192f4d10.slice. Jul 14 21:45:13.417695 systemd[1]: Created slice kubepods-burstable-pod9775ce27_3a5f_457e_ba5a_0f31528fb8e2.slice. 
Jul 14 21:45:13.424675 kubelet[1414]: I0714 21:45:13.424643 1414 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:45:13.436899 kubelet[1414]: I0714 21:45:13.436849 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-xtables-lock\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.436899 kubelet[1414]: I0714 21:45:13.436895 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e9e6f88-c8fc-434f-ba69-c189192f4d10-lib-modules\") pod \"kube-proxy-grcgj\" (UID: \"9e9e6f88-c8fc-434f-ba69-c189192f4d10\") " pod="kube-system/kube-proxy-grcgj" Jul 14 21:45:13.437116 kubelet[1414]: I0714 21:45:13.436915 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnk5w\" (UniqueName: \"kubernetes.io/projected/9e9e6f88-c8fc-434f-ba69-c189192f4d10-kube-api-access-hnk5w\") pod \"kube-proxy-grcgj\" (UID: \"9e9e6f88-c8fc-434f-ba69-c189192f4d10\") " pod="kube-system/kube-proxy-grcgj" Jul 14 21:45:13.437116 kubelet[1414]: I0714 21:45:13.436932 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-cgroup\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437116 kubelet[1414]: I0714 21:45:13.436948 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-etc-cni-netd\") pod \"cilium-lsjwc\" (UID: 
\"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437116 kubelet[1414]: I0714 21:45:13.436963 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-config-path\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437116 kubelet[1414]: I0714 21:45:13.436977 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-kernel\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437228 kubelet[1414]: I0714 21:45:13.436991 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e9e6f88-c8fc-434f-ba69-c189192f4d10-kube-proxy\") pod \"kube-proxy-grcgj\" (UID: \"9e9e6f88-c8fc-434f-ba69-c189192f4d10\") " pod="kube-system/kube-proxy-grcgj" Jul 14 21:45:13.437228 kubelet[1414]: I0714 21:45:13.437008 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e9e6f88-c8fc-434f-ba69-c189192f4d10-xtables-lock\") pod \"kube-proxy-grcgj\" (UID: \"9e9e6f88-c8fc-434f-ba69-c189192f4d10\") " pod="kube-system/kube-proxy-grcgj" Jul 14 21:45:13.437228 kubelet[1414]: I0714 21:45:13.437023 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-run\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437228 
kubelet[1414]: I0714 21:45:13.437045 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-bpf-maps\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437228 kubelet[1414]: I0714 21:45:13.437062 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hostproc\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437228 kubelet[1414]: I0714 21:45:13.437077 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-clustermesh-secrets\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437347 kubelet[1414]: I0714 21:45:13.437091 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hubble-tls\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437347 kubelet[1414]: I0714 21:45:13.437105 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cni-path\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437347 kubelet[1414]: I0714 21:45:13.437120 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-lib-modules\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437347 kubelet[1414]: I0714 21:45:13.437135 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-net\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.437347 kubelet[1414]: I0714 21:45:13.437151 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnf6\" (UniqueName: \"kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-kube-api-access-ksnf6\") pod \"cilium-lsjwc\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " pod="kube-system/cilium-lsjwc" Jul 14 21:45:13.539136 kubelet[1414]: I0714 21:45:13.539093 1414 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 21:45:13.716553 kubelet[1414]: E0714 21:45:13.716446 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:13.717695 env[1210]: time="2025-07-14T21:45:13.717648300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grcgj,Uid:9e9e6f88-c8fc-434f-ba69-c189192f4d10,Namespace:kube-system,Attempt:0,}" Jul 14 21:45:13.729027 kubelet[1414]: E0714 21:45:13.728988 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:13.729590 env[1210]: time="2025-07-14T21:45:13.729528310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsjwc,Uid:9775ce27-3a5f-457e-ba5a-0f31528fb8e2,Namespace:kube-system,Attempt:0,}" Jul 14 21:45:14.397137 kubelet[1414]: E0714 21:45:14.397084 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:14.466015 env[1210]: time="2025-07-14T21:45:14.465963716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.468064 env[1210]: time="2025-07-14T21:45:14.468031570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.468982 env[1210]: time="2025-07-14T21:45:14.468952143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.470476 
env[1210]: time="2025-07-14T21:45:14.470442749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.472081 env[1210]: time="2025-07-14T21:45:14.472051878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.474444 env[1210]: time="2025-07-14T21:45:14.474414862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.477023 env[1210]: time="2025-07-14T21:45:14.476993893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.477951 env[1210]: time="2025-07-14T21:45:14.477926464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:14.504190 env[1210]: time="2025-07-14T21:45:14.504094437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:45:14.504190 env[1210]: time="2025-07-14T21:45:14.504136917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:45:14.504190 env[1210]: time="2025-07-14T21:45:14.504161319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:45:14.504485 env[1210]: time="2025-07-14T21:45:14.504438654Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f pid=1479 runtime=io.containerd.runc.v2 Jul 14 21:45:14.504485 env[1210]: time="2025-07-14T21:45:14.504432655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:45:14.504561 env[1210]: time="2025-07-14T21:45:14.504494469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:45:14.504561 env[1210]: time="2025-07-14T21:45:14.504521911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:45:14.504709 env[1210]: time="2025-07-14T21:45:14.504659525Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eee1c091e8212c8bb544f2c5326dbeb23d1d74afc6a3e00ed0997e9d23ca191 pid=1480 runtime=io.containerd.runc.v2 Jul 14 21:45:14.524007 systemd[1]: Started cri-containerd-ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f.scope. Jul 14 21:45:14.526721 systemd[1]: Started cri-containerd-7eee1c091e8212c8bb544f2c5326dbeb23d1d74afc6a3e00ed0997e9d23ca191.scope. Jul 14 21:45:14.544999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790909453.mount: Deactivated successfully. 
Jul 14 21:45:14.564772 env[1210]: time="2025-07-14T21:45:14.564721093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsjwc,Uid:9775ce27-3a5f-457e-ba5a-0f31528fb8e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\"" Jul 14 21:45:14.565761 env[1210]: time="2025-07-14T21:45:14.565733193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grcgj,Uid:9e9e6f88-c8fc-434f-ba69-c189192f4d10,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eee1c091e8212c8bb544f2c5326dbeb23d1d74afc6a3e00ed0997e9d23ca191\"" Jul 14 21:45:14.565849 kubelet[1414]: E0714 21:45:14.565809 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:14.567857 env[1210]: time="2025-07-14T21:45:14.567816896Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 21:45:14.568693 kubelet[1414]: E0714 21:45:14.568671 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:15.397233 kubelet[1414]: E0714 21:45:15.397183 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:16.398354 kubelet[1414]: E0714 21:45:16.398305 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:17.399331 kubelet[1414]: E0714 21:45:17.399285 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:17.885614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647218905.mount: Deactivated successfully. 
Jul 14 21:45:18.400290 kubelet[1414]: E0714 21:45:18.400246 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:19.400968 kubelet[1414]: E0714 21:45:19.400916 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:20.019138 env[1210]: time="2025-07-14T21:45:20.019090872Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:20.020424 env[1210]: time="2025-07-14T21:45:20.020397417Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:20.021633 env[1210]: time="2025-07-14T21:45:20.021604208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:20.022375 env[1210]: time="2025-07-14T21:45:20.022347758Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 21:45:20.024752 env[1210]: time="2025-07-14T21:45:20.024727015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 14 21:45:20.024903 env[1210]: time="2025-07-14T21:45:20.024868417Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:45:20.036726 env[1210]: 
time="2025-07-14T21:45:20.036675529Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\"" Jul 14 21:45:20.037411 env[1210]: time="2025-07-14T21:45:20.037379564Z" level=info msg="StartContainer for \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\"" Jul 14 21:45:20.055103 systemd[1]: Started cri-containerd-bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de.scope. Jul 14 21:45:20.093080 env[1210]: time="2025-07-14T21:45:20.093028099Z" level=info msg="StartContainer for \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\" returns successfully" Jul 14 21:45:20.125439 systemd[1]: cri-containerd-bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de.scope: Deactivated successfully. Jul 14 21:45:20.256615 env[1210]: time="2025-07-14T21:45:20.256564231Z" level=info msg="shim disconnected" id=bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de Jul 14 21:45:20.256846 env[1210]: time="2025-07-14T21:45:20.256813677Z" level=warning msg="cleaning up after shim disconnected" id=bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de namespace=k8s.io Jul 14 21:45:20.256915 env[1210]: time="2025-07-14T21:45:20.256900877Z" level=info msg="cleaning up dead shim" Jul 14 21:45:20.264119 env[1210]: time="2025-07-14T21:45:20.264085686Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1594 runtime=io.containerd.runc.v2\n" Jul 14 21:45:20.402105 kubelet[1414]: E0714 21:45:20.401432 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:20.626357 kubelet[1414]: E0714 21:45:20.626303 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:45:20.628210 env[1210]: time="2025-07-14T21:45:20.628167386Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 21:45:20.671549 env[1210]: time="2025-07-14T21:45:20.671199025Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\"" Jul 14 21:45:20.672183 env[1210]: time="2025-07-14T21:45:20.672146632Z" level=info msg="StartContainer for \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\"" Jul 14 21:45:20.686920 systemd[1]: Started cri-containerd-7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4.scope. Jul 14 21:45:20.722419 env[1210]: time="2025-07-14T21:45:20.722373342Z" level=info msg="StartContainer for \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\" returns successfully" Jul 14 21:45:20.730940 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:45:20.731144 systemd[1]: Stopped systemd-sysctl.service. Jul 14 21:45:20.731308 systemd[1]: Stopping systemd-sysctl.service... Jul 14 21:45:20.733655 systemd[1]: Starting systemd-sysctl.service... Jul 14 21:45:20.733954 systemd[1]: cri-containerd-7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4.scope: Deactivated successfully. Jul 14 21:45:20.740562 systemd[1]: Finished systemd-sysctl.service. 
Jul 14 21:45:20.754936 env[1210]: time="2025-07-14T21:45:20.754890894Z" level=info msg="shim disconnected" id=7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4
Jul 14 21:45:20.754936 env[1210]: time="2025-07-14T21:45:20.754932542Z" level=warning msg="cleaning up after shim disconnected" id=7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4 namespace=k8s.io
Jul 14 21:45:20.755116 env[1210]: time="2025-07-14T21:45:20.754944252Z" level=info msg="cleaning up dead shim"
Jul 14 21:45:20.761274 env[1210]: time="2025-07-14T21:45:20.761230307Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1659 runtime=io.containerd.runc.v2\n"
Jul 14 21:45:21.032891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de-rootfs.mount: Deactivated successfully.
Jul 14 21:45:21.372409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897981767.mount: Deactivated successfully.
Jul 14 21:45:21.401957 kubelet[1414]: E0714 21:45:21.401905 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:21.630274 kubelet[1414]: E0714 21:45:21.629978 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:21.632310 env[1210]: time="2025-07-14T21:45:21.632268565Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:45:21.644812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663430803.mount: Deactivated successfully.
Jul 14 21:45:21.650637 env[1210]: time="2025-07-14T21:45:21.650586861Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\""
Jul 14 21:45:21.651495 env[1210]: time="2025-07-14T21:45:21.651395520Z" level=info msg="StartContainer for \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\""
Jul 14 21:45:21.673717 systemd[1]: Started cri-containerd-9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2.scope.
Jul 14 21:45:21.729022 env[1210]: time="2025-07-14T21:45:21.728969902Z" level=info msg="StartContainer for \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\" returns successfully"
Jul 14 21:45:21.730797 systemd[1]: cri-containerd-9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2.scope: Deactivated successfully.
Jul 14 21:45:21.969123 env[1210]: time="2025-07-14T21:45:21.968737162Z" level=info msg="shim disconnected" id=9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2
Jul 14 21:45:21.969123 env[1210]: time="2025-07-14T21:45:21.968778377Z" level=warning msg="cleaning up after shim disconnected" id=9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2 namespace=k8s.io
Jul 14 21:45:21.969123 env[1210]: time="2025-07-14T21:45:21.968786901Z" level=info msg="cleaning up dead shim"
Jul 14 21:45:21.974631 env[1210]: time="2025-07-14T21:45:21.974581937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:21.976036 env[1210]: time="2025-07-14T21:45:21.975999171Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1714 runtime=io.containerd.runc.v2\n"
Jul 14 21:45:21.976792 env[1210]: time="2025-07-14T21:45:21.976767098Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:21.978525 env[1210]: time="2025-07-14T21:45:21.978480196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:21.980047 env[1210]: time="2025-07-14T21:45:21.980013234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:21.980419 env[1210]: time="2025-07-14T21:45:21.980386019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 14 21:45:21.982862 env[1210]: time="2025-07-14T21:45:21.982815535Z" level=info msg="CreateContainer within sandbox \"7eee1c091e8212c8bb544f2c5326dbeb23d1d74afc6a3e00ed0997e9d23ca191\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 21:45:21.992773 env[1210]: time="2025-07-14T21:45:21.992726325Z" level=info msg="CreateContainer within sandbox \"7eee1c091e8212c8bb544f2c5326dbeb23d1d74afc6a3e00ed0997e9d23ca191\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2c95839bd24b44309c976e1fa4698804aa9d770e414fb5a17db1383feeec443\""
Jul 14 21:45:21.993433 env[1210]: time="2025-07-14T21:45:21.993401568Z" level=info msg="StartContainer for \"b2c95839bd24b44309c976e1fa4698804aa9d770e414fb5a17db1383feeec443\""
Jul 14 21:45:22.010743 systemd[1]: Started cri-containerd-b2c95839bd24b44309c976e1fa4698804aa9d770e414fb5a17db1383feeec443.scope.
Jul 14 21:45:22.046286 env[1210]: time="2025-07-14T21:45:22.046242306Z" level=info msg="StartContainer for \"b2c95839bd24b44309c976e1fa4698804aa9d770e414fb5a17db1383feeec443\" returns successfully"
Jul 14 21:45:22.403138 kubelet[1414]: E0714 21:45:22.403040 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:22.633850 kubelet[1414]: E0714 21:45:22.633622 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:22.635909 env[1210]: time="2025-07-14T21:45:22.635859717Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:45:22.637318 kubelet[1414]: E0714 21:45:22.637271 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:22.645498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629219371.mount: Deactivated successfully.
Jul 14 21:45:22.652052 env[1210]: time="2025-07-14T21:45:22.651998724Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\""
Jul 14 21:45:22.652692 env[1210]: time="2025-07-14T21:45:22.652653516Z" level=info msg="StartContainer for \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\""
Jul 14 21:45:22.673203 systemd[1]: Started cri-containerd-17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303.scope.
Jul 14 21:45:22.707597 systemd[1]: cri-containerd-17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303.scope: Deactivated successfully.
Jul 14 21:45:22.708879 env[1210]: time="2025-07-14T21:45:22.708790645Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9775ce27_3a5f_457e_ba5a_0f31528fb8e2.slice/cri-containerd-17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303.scope/memory.events\": no such file or directory"
Jul 14 21:45:22.713391 env[1210]: time="2025-07-14T21:45:22.713344046Z" level=info msg="StartContainer for \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\" returns successfully"
Jul 14 21:45:22.740158 env[1210]: time="2025-07-14T21:45:22.740111969Z" level=info msg="shim disconnected" id=17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303
Jul 14 21:45:22.740468 env[1210]: time="2025-07-14T21:45:22.740445935Z" level=warning msg="cleaning up after shim disconnected" id=17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303 namespace=k8s.io
Jul 14 21:45:22.740542 env[1210]: time="2025-07-14T21:45:22.740527869Z" level=info msg="cleaning up dead shim"
Jul 14 21:45:22.746979 env[1210]: time="2025-07-14T21:45:22.746945306Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:45:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1940 runtime=io.containerd.runc.v2\n"
Jul 14 21:45:23.403945 kubelet[1414]: E0714 21:45:23.403908 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:23.641490 kubelet[1414]: E0714 21:45:23.641460 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:23.641958 kubelet[1414]: E0714 21:45:23.641943 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:23.643994 env[1210]: time="2025-07-14T21:45:23.643956738Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:45:23.657709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396893785.mount: Deactivated successfully.
Jul 14 21:45:23.660368 kubelet[1414]: I0714 21:45:23.660299 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grcgj" podStartSLOduration=5.247872066 podStartE2EDuration="12.660283598s" podCreationTimestamp="2025-07-14 21:45:11 +0000 UTC" firstStartedPulling="2025-07-14 21:45:14.569068918 +0000 UTC m=+3.890292737" lastFinishedPulling="2025-07-14 21:45:21.981480449 +0000 UTC m=+11.302704269" observedRunningTime="2025-07-14 21:45:22.676642584 +0000 UTC m=+11.997866363" watchObservedRunningTime="2025-07-14 21:45:23.660283598 +0000 UTC m=+12.981507417"
Jul 14 21:45:23.663710 env[1210]: time="2025-07-14T21:45:23.663646849Z" level=info msg="CreateContainer within sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\""
Jul 14 21:45:23.664239 env[1210]: time="2025-07-14T21:45:23.664214600Z" level=info msg="StartContainer for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\""
Jul 14 21:45:23.680970 systemd[1]: Started cri-containerd-e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f.scope.
Jul 14 21:45:23.721310 env[1210]: time="2025-07-14T21:45:23.721266902Z" level=info msg="StartContainer for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" returns successfully"
Jul 14 21:45:23.888637 kubelet[1414]: I0714 21:45:23.888370 1414 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 14 21:45:24.064853 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 14 21:45:24.304853 kernel: Initializing XFRM netlink socket
Jul 14 21:45:24.307847 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 14 21:45:24.404621 kubelet[1414]: E0714 21:45:24.404561 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:24.645029 kubelet[1414]: E0714 21:45:24.644925 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:24.661256 kubelet[1414]: I0714 21:45:24.661181 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lsjwc" podStartSLOduration=8.205228322 podStartE2EDuration="13.661152022s" podCreationTimestamp="2025-07-14 21:45:11 +0000 UTC" firstStartedPulling="2025-07-14 21:45:14.56744313 +0000 UTC m=+3.888666909" lastFinishedPulling="2025-07-14 21:45:20.02336683 +0000 UTC m=+9.344590609" observedRunningTime="2025-07-14 21:45:24.660707024 +0000 UTC m=+13.981930843" watchObservedRunningTime="2025-07-14 21:45:24.661152022 +0000 UTC m=+13.982375841"
Jul 14 21:45:25.404854 kubelet[1414]: E0714 21:45:25.404806 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:25.645947 kubelet[1414]: E0714 21:45:25.645910 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:25.917793 systemd-networkd[1040]: cilium_host: Link UP
Jul 14 21:45:25.918524 systemd-networkd[1040]: cilium_net: Link UP
Jul 14 21:45:25.920399 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 14 21:45:25.920471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 14 21:45:25.919721 systemd-networkd[1040]: cilium_net: Gained carrier
Jul 14 21:45:25.920397 systemd-networkd[1040]: cilium_host: Gained carrier
Jul 14 21:45:25.999433 systemd-networkd[1040]: cilium_vxlan: Link UP
Jul 14 21:45:25.999442 systemd-networkd[1040]: cilium_vxlan: Gained carrier
Jul 14 21:45:26.298871 kernel: NET: Registered PF_ALG protocol family
Jul 14 21:45:26.305991 systemd-networkd[1040]: cilium_host: Gained IPv6LL
Jul 14 21:45:26.404964 kubelet[1414]: E0714 21:45:26.404917 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:26.647403 kubelet[1414]: E0714 21:45:26.647300 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:26.857991 systemd-networkd[1040]: cilium_net: Gained IPv6LL
Jul 14 21:45:26.873757 systemd-networkd[1040]: lxc_health: Link UP
Jul 14 21:45:26.879125 systemd-networkd[1040]: lxc_health: Gained carrier
Jul 14 21:45:26.879890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 14 21:45:27.050055 systemd-networkd[1040]: cilium_vxlan: Gained IPv6LL
Jul 14 21:45:27.405395 kubelet[1414]: E0714 21:45:27.405270 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:27.730292 kubelet[1414]: E0714 21:45:27.730191 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:27.917405 systemd[1]: Created slice kubepods-besteffort-pod2cc16bef_941a_4188_8bb0_ff0932320e41.slice.
Jul 14 21:45:27.936952 kubelet[1414]: I0714 21:45:27.936905 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2v4\" (UniqueName: \"kubernetes.io/projected/2cc16bef-941a-4188-8bb0-ff0932320e41-kube-api-access-jb2v4\") pod \"nginx-deployment-7fcdb87857-h4l4s\" (UID: \"2cc16bef-941a-4188-8bb0-ff0932320e41\") " pod="default/nginx-deployment-7fcdb87857-h4l4s"
Jul 14 21:45:28.221587 env[1210]: time="2025-07-14T21:45:28.221130397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-h4l4s,Uid:2cc16bef-941a-4188-8bb0-ff0932320e41,Namespace:default,Attempt:0,}"
Jul 14 21:45:28.268408 systemd-networkd[1040]: lxcc89fa4d7e3f1: Link UP
Jul 14 21:45:28.275949 kernel: eth0: renamed from tmp9184b
Jul 14 21:45:28.284871 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 14 21:45:28.284958 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc89fa4d7e3f1: link becomes ready
Jul 14 21:45:28.285572 systemd-networkd[1040]: lxcc89fa4d7e3f1: Gained carrier
Jul 14 21:45:28.406073 kubelet[1414]: E0714 21:45:28.406034 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:28.522978 systemd-networkd[1040]: lxc_health: Gained IPv6LL
Jul 14 21:45:28.650687 kubelet[1414]: E0714 21:45:28.650654 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:29.406912 kubelet[1414]: E0714 21:45:29.406868 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:29.545979 systemd-networkd[1040]: lxcc89fa4d7e3f1: Gained IPv6LL
Jul 14 21:45:29.651892 kubelet[1414]: E0714 21:45:29.651856 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:45:30.407354 kubelet[1414]: E0714 21:45:30.407317 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:31.394556 kubelet[1414]: E0714 21:45:31.394505 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:31.407951 kubelet[1414]: E0714 21:45:31.407918 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:31.440051 env[1210]: time="2025-07-14T21:45:31.439979697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:45:31.440051 env[1210]: time="2025-07-14T21:45:31.440017268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:45:31.440432 env[1210]: time="2025-07-14T21:45:31.440387779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:45:31.440645 env[1210]: time="2025-07-14T21:45:31.440609925Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9184bb65fb11dfb4a8d0c088508c390c4d74448b3554b808d092254d3091918a pid=2485 runtime=io.containerd.runc.v2
Jul 14 21:45:31.454039 systemd[1]: Started cri-containerd-9184bb65fb11dfb4a8d0c088508c390c4d74448b3554b808d092254d3091918a.scope.
Jul 14 21:45:31.514378 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:45:31.530499 env[1210]: time="2025-07-14T21:45:31.530441703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-h4l4s,Uid:2cc16bef-941a-4188-8bb0-ff0932320e41,Namespace:default,Attempt:0,} returns sandbox id \"9184bb65fb11dfb4a8d0c088508c390c4d74448b3554b808d092254d3091918a\""
Jul 14 21:45:31.531896 env[1210]: time="2025-07-14T21:45:31.531869150Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jul 14 21:45:32.408466 kubelet[1414]: E0714 21:45:32.408371 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:33.409078 kubelet[1414]: E0714 21:45:33.409035 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:33.612394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955767636.mount: Deactivated successfully.
Jul 14 21:45:34.409473 kubelet[1414]: E0714 21:45:34.409428 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:34.708998 env[1210]: time="2025-07-14T21:45:34.708748567Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:34.710370 env[1210]: time="2025-07-14T21:45:34.710320659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:34.712776 env[1210]: time="2025-07-14T21:45:34.712738932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:34.714568 env[1210]: time="2025-07-14T21:45:34.714536912Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:34.715527 env[1210]: time="2025-07-14T21:45:34.715478622Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\""
Jul 14 21:45:34.718023 env[1210]: time="2025-07-14T21:45:34.717990101Z" level=info msg="CreateContainer within sandbox \"9184bb65fb11dfb4a8d0c088508c390c4d74448b3554b808d092254d3091918a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jul 14 21:45:34.727144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346731101.mount: Deactivated successfully.
Jul 14 21:45:34.728346 env[1210]: time="2025-07-14T21:45:34.728288011Z" level=info msg="CreateContainer within sandbox \"9184bb65fb11dfb4a8d0c088508c390c4d74448b3554b808d092254d3091918a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"31c2f475e372d9804c79a0510a3c5988767b039d5bd972c3b93ae100a370eba8\""
Jul 14 21:45:34.728815 env[1210]: time="2025-07-14T21:45:34.728789875Z" level=info msg="StartContainer for \"31c2f475e372d9804c79a0510a3c5988767b039d5bd972c3b93ae100a370eba8\""
Jul 14 21:45:34.747281 systemd[1]: Started cri-containerd-31c2f475e372d9804c79a0510a3c5988767b039d5bd972c3b93ae100a370eba8.scope.
Jul 14 21:45:34.790443 env[1210]: time="2025-07-14T21:45:34.790387560Z" level=info msg="StartContainer for \"31c2f475e372d9804c79a0510a3c5988767b039d5bd972c3b93ae100a370eba8\" returns successfully"
Jul 14 21:45:35.409724 kubelet[1414]: E0714 21:45:35.409679 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:35.674963 kubelet[1414]: I0714 21:45:35.674843 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-h4l4s" podStartSLOduration=5.489502687 podStartE2EDuration="8.674815664s" podCreationTimestamp="2025-07-14 21:45:27 +0000 UTC" firstStartedPulling="2025-07-14 21:45:31.531402066 +0000 UTC m=+20.852625886" lastFinishedPulling="2025-07-14 21:45:34.716715044 +0000 UTC m=+24.037938863" observedRunningTime="2025-07-14 21:45:35.674481875 +0000 UTC m=+24.995705694" watchObservedRunningTime="2025-07-14 21:45:35.674815664 +0000 UTC m=+24.996039484"
Jul 14 21:45:36.410301 kubelet[1414]: E0714 21:45:36.410260 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:37.411167 kubelet[1414]: E0714 21:45:37.411123 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:38.411971 kubelet[1414]: E0714 21:45:38.411929 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:39.412083 kubelet[1414]: E0714 21:45:39.412042 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:39.469412 systemd[1]: Created slice kubepods-besteffort-pod63df1261_c9c9_42eb_957c_ad0c0da708df.slice.
Jul 14 21:45:39.507145 kubelet[1414]: I0714 21:45:39.507110 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/63df1261-c9c9-42eb-957c-ad0c0da708df-data\") pod \"nfs-server-provisioner-0\" (UID: \"63df1261-c9c9-42eb-957c-ad0c0da708df\") " pod="default/nfs-server-provisioner-0"
Jul 14 21:45:39.507346 kubelet[1414]: I0714 21:45:39.507328 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqb9f\" (UniqueName: \"kubernetes.io/projected/63df1261-c9c9-42eb-957c-ad0c0da708df-kube-api-access-vqb9f\") pod \"nfs-server-provisioner-0\" (UID: \"63df1261-c9c9-42eb-957c-ad0c0da708df\") " pod="default/nfs-server-provisioner-0"
Jul 14 21:45:39.773334 env[1210]: time="2025-07-14T21:45:39.773222756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:63df1261-c9c9-42eb-957c-ad0c0da708df,Namespace:default,Attempt:0,}"
Jul 14 21:45:39.803056 systemd-networkd[1040]: lxce7e31eaa1bf5: Link UP
Jul 14 21:45:39.815851 kernel: eth0: renamed from tmpe5e10
Jul 14 21:45:39.822853 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 14 21:45:39.822952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce7e31eaa1bf5: link becomes ready
Jul 14 21:45:39.823552 systemd-networkd[1040]: lxce7e31eaa1bf5: Gained carrier
Jul 14 21:45:40.077741 env[1210]: time="2025-07-14T21:45:40.077669060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:45:40.077741 env[1210]: time="2025-07-14T21:45:40.077711014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:45:40.077741 env[1210]: time="2025-07-14T21:45:40.077721742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:45:40.078062 env[1210]: time="2025-07-14T21:45:40.078022060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5e100188931aa96a208c711168ab95cbc349b26c71433120460019962fe984b pid=2619 runtime=io.containerd.runc.v2
Jul 14 21:45:40.093795 systemd[1]: Started cri-containerd-e5e100188931aa96a208c711168ab95cbc349b26c71433120460019962fe984b.scope.
Jul 14 21:45:40.122948 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:45:40.142863 env[1210]: time="2025-07-14T21:45:40.142581823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:63df1261-c9c9-42eb-957c-ad0c0da708df,Namespace:default,Attempt:0,} returns sandbox id \"e5e100188931aa96a208c711168ab95cbc349b26c71433120460019962fe984b\""
Jul 14 21:45:40.144420 env[1210]: time="2025-07-14T21:45:40.144322201Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jul 14 21:45:40.413633 kubelet[1414]: E0714 21:45:40.413490 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:41.414289 kubelet[1414]: E0714 21:45:41.414224 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:41.642062 systemd-networkd[1040]: lxce7e31eaa1bf5: Gained IPv6LL
Jul 14 21:45:42.348449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581381006.mount: Deactivated successfully.
Jul 14 21:45:42.414811 kubelet[1414]: E0714 21:45:42.414763 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:43.415050 kubelet[1414]: E0714 21:45:43.414989 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:44.125790 env[1210]: time="2025-07-14T21:45:44.125720853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:44.126861 env[1210]: time="2025-07-14T21:45:44.126835882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:44.128446 env[1210]: time="2025-07-14T21:45:44.128416607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:44.131712 env[1210]: time="2025-07-14T21:45:44.131672876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:45:44.132583 env[1210]: time="2025-07-14T21:45:44.132532582Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jul 14 21:45:44.135102 env[1210]: time="2025-07-14T21:45:44.135047101Z" level=info msg="CreateContainer within sandbox \"e5e100188931aa96a208c711168ab95cbc349b26c71433120460019962fe984b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jul 14 21:45:44.144987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250054981.mount: Deactivated successfully.
Jul 14 21:45:44.148257 env[1210]: time="2025-07-14T21:45:44.148213028Z" level=info msg="CreateContainer within sandbox \"e5e100188931aa96a208c711168ab95cbc349b26c71433120460019962fe984b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f6581d968b5c94fa710c70cd0cd1b764c884af157959746d782b8f9b71cb1def\""
Jul 14 21:45:44.148924 env[1210]: time="2025-07-14T21:45:44.148884054Z" level=info msg="StartContainer for \"f6581d968b5c94fa710c70cd0cd1b764c884af157959746d782b8f9b71cb1def\""
Jul 14 21:45:44.168867 systemd[1]: Started cri-containerd-f6581d968b5c94fa710c70cd0cd1b764c884af157959746d782b8f9b71cb1def.scope.
Jul 14 21:45:44.221124 env[1210]: time="2025-07-14T21:45:44.221064847Z" level=info msg="StartContainer for \"f6581d968b5c94fa710c70cd0cd1b764c884af157959746d782b8f9b71cb1def\" returns successfully"
Jul 14 21:45:44.415978 kubelet[1414]: E0714 21:45:44.415855 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:44.700848 kubelet[1414]: I0714 21:45:44.700665 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.711021919 podStartE2EDuration="5.700649077s" podCreationTimestamp="2025-07-14 21:45:39 +0000 UTC" firstStartedPulling="2025-07-14 21:45:40.144091698 +0000 UTC m=+29.465315517" lastFinishedPulling="2025-07-14 21:45:44.133718896 +0000 UTC m=+33.454942675" observedRunningTime="2025-07-14 21:45:44.699624546 +0000 UTC m=+34.020848365" watchObservedRunningTime="2025-07-14 21:45:44.700649077 +0000 UTC m=+34.021872896"
Jul 14 21:45:45.143118 systemd[1]: run-containerd-runc-k8s.io-f6581d968b5c94fa710c70cd0cd1b764c884af157959746d782b8f9b71cb1def-runc.ACE0H5.mount: Deactivated successfully.
Jul 14 21:45:45.416403 kubelet[1414]: E0714 21:45:45.416291 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:46.416970 kubelet[1414]: E0714 21:45:46.416924 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:47.418065 kubelet[1414]: E0714 21:45:47.418011 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:48.418924 kubelet[1414]: E0714 21:45:48.418888 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:49.420309 kubelet[1414]: E0714 21:45:49.420272 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:45:49.984539 update_engine[1200]: I0714 21:45:49.984156 1200 update_attempter.cc:509] Updating boot flags...
Jul 14 21:45:50.420983 kubelet[1414]: E0714 21:45:50.420926 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:51.394486 kubelet[1414]: E0714 21:45:51.394400 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:51.421375 kubelet[1414]: E0714 21:45:51.421328 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:52.422056 kubelet[1414]: E0714 21:45:52.421971 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:53.422928 kubelet[1414]: E0714 21:45:53.422884 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:54.424076 kubelet[1414]: E0714 21:45:54.424019 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:54.477599 systemd[1]: Created slice kubepods-besteffort-pod867598ab_c4ee_40bb_9368_1a2bf80f83be.slice. 
Jul 14 21:45:54.504838 kubelet[1414]: I0714 21:45:54.504788 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46c4c3f8-25a5-4144-ad97-ffc66a52bf66\" (UniqueName: \"kubernetes.io/nfs/867598ab-c4ee-40bb-9368-1a2bf80f83be-pvc-46c4c3f8-25a5-4144-ad97-ffc66a52bf66\") pod \"test-pod-1\" (UID: \"867598ab-c4ee-40bb-9368-1a2bf80f83be\") " pod="default/test-pod-1" Jul 14 21:45:54.505037 kubelet[1414]: I0714 21:45:54.505018 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjtm\" (UniqueName: \"kubernetes.io/projected/867598ab-c4ee-40bb-9368-1a2bf80f83be-kube-api-access-qxjtm\") pod \"test-pod-1\" (UID: \"867598ab-c4ee-40bb-9368-1a2bf80f83be\") " pod="default/test-pod-1" Jul 14 21:45:54.644878 kernel: FS-Cache: Loaded Jul 14 21:45:54.674995 kernel: RPC: Registered named UNIX socket transport module. Jul 14 21:45:54.675114 kernel: RPC: Registered udp transport module. Jul 14 21:45:54.675844 kernel: RPC: Registered tcp transport module. Jul 14 21:45:54.676827 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jul 14 21:45:54.721857 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 14 21:45:54.864105 kernel: NFS: Registering the id_resolver key type Jul 14 21:45:54.864243 kernel: Key type id_resolver registered Jul 14 21:45:54.864263 kernel: Key type id_legacy registered Jul 14 21:45:54.907228 nfsidmap[2751]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 14 21:45:54.911076 nfsidmap[2754]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 14 21:45:55.083410 env[1210]: time="2025-07-14T21:45:55.083357032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:867598ab-c4ee-40bb-9368-1a2bf80f83be,Namespace:default,Attempt:0,}" Jul 14 21:45:55.109603 systemd-networkd[1040]: lxc6a0585376f6c: Link UP Jul 14 21:45:55.122861 kernel: eth0: renamed from tmpa2462 Jul 14 21:45:55.133881 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 21:45:55.133976 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6a0585376f6c: link becomes ready Jul 14 21:45:55.133944 systemd-networkd[1040]: lxc6a0585376f6c: Gained carrier Jul 14 21:45:55.302736 env[1210]: time="2025-07-14T21:45:55.302202683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:45:55.302736 env[1210]: time="2025-07-14T21:45:55.302245099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:45:55.302736 env[1210]: time="2025-07-14T21:45:55.302255222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:45:55.302933 env[1210]: time="2025-07-14T21:45:55.302730396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a24626ab2a89de9801f9ff609ea64695b1b9255b51e1f549193b641da2dcc538 pid=2791 runtime=io.containerd.runc.v2 Jul 14 21:45:55.313867 systemd[1]: Started cri-containerd-a24626ab2a89de9801f9ff609ea64695b1b9255b51e1f549193b641da2dcc538.scope. Jul 14 21:45:55.336858 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:45:55.354254 env[1210]: time="2025-07-14T21:45:55.354206456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:867598ab-c4ee-40bb-9368-1a2bf80f83be,Namespace:default,Attempt:0,} returns sandbox id \"a24626ab2a89de9801f9ff609ea64695b1b9255b51e1f549193b641da2dcc538\"" Jul 14 21:45:55.355464 env[1210]: time="2025-07-14T21:45:55.355435746Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 14 21:45:55.424437 kubelet[1414]: E0714 21:45:55.424391 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:55.594227 env[1210]: time="2025-07-14T21:45:55.594120770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:55.596052 env[1210]: time="2025-07-14T21:45:55.596015183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:55.597559 env[1210]: time="2025-07-14T21:45:55.597530857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 
21:45:55.599330 env[1210]: time="2025-07-14T21:45:55.599295742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:45:55.600214 env[1210]: time="2025-07-14T21:45:55.600168781Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 14 21:45:55.602695 env[1210]: time="2025-07-14T21:45:55.602650769Z" level=info msg="CreateContainer within sandbox \"a24626ab2a89de9801f9ff609ea64695b1b9255b51e1f549193b641da2dcc538\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 14 21:45:55.615152 env[1210]: time="2025-07-14T21:45:55.615103962Z" level=info msg="CreateContainer within sandbox \"a24626ab2a89de9801f9ff609ea64695b1b9255b51e1f549193b641da2dcc538\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1990c21792895ebdf5c8caed601b20bf3a011eccbdb4f0e4653901b0c369a7d2\"" Jul 14 21:45:55.615482 env[1210]: time="2025-07-14T21:45:55.615459172Z" level=info msg="StartContainer for \"1990c21792895ebdf5c8caed601b20bf3a011eccbdb4f0e4653901b0c369a7d2\"" Jul 14 21:45:55.634354 systemd[1]: Started cri-containerd-1990c21792895ebdf5c8caed601b20bf3a011eccbdb4f0e4653901b0c369a7d2.scope. 
Jul 14 21:45:55.667837 env[1210]: time="2025-07-14T21:45:55.667770977Z" level=info msg="StartContainer for \"1990c21792895ebdf5c8caed601b20bf3a011eccbdb4f0e4653901b0c369a7d2\" returns successfully" Jul 14 21:45:55.714718 kubelet[1414]: I0714 21:45:55.714429 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.468004461 podStartE2EDuration="16.714400785s" podCreationTimestamp="2025-07-14 21:45:39 +0000 UTC" firstStartedPulling="2025-07-14 21:45:55.355026596 +0000 UTC m=+44.676250415" lastFinishedPulling="2025-07-14 21:45:55.60142292 +0000 UTC m=+44.922646739" observedRunningTime="2025-07-14 21:45:55.714236645 +0000 UTC m=+45.035460464" watchObservedRunningTime="2025-07-14 21:45:55.714400785 +0000 UTC m=+45.035624604" Jul 14 21:45:56.425169 kubelet[1414]: E0714 21:45:56.425123 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:57.066018 systemd-networkd[1040]: lxc6a0585376f6c: Gained IPv6LL Jul 14 21:45:57.425573 kubelet[1414]: E0714 21:45:57.425417 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:58.426974 kubelet[1414]: E0714 21:45:58.426936 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:45:59.427848 kubelet[1414]: E0714 21:45:59.427797 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:00.428432 kubelet[1414]: E0714 21:46:00.428397 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:01.429482 kubelet[1414]: E0714 21:46:01.429427 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:02.430012 kubelet[1414]: 
E0714 21:46:02.429965 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:02.525414 systemd[1]: run-containerd-runc-k8s.io-e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f-runc.ZHe2U1.mount: Deactivated successfully. Jul 14 21:46:02.539761 env[1210]: time="2025-07-14T21:46:02.539696489Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:46:02.545036 env[1210]: time="2025-07-14T21:46:02.544977479Z" level=info msg="StopContainer for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" with timeout 2 (s)" Jul 14 21:46:02.545343 env[1210]: time="2025-07-14T21:46:02.545319531Z" level=info msg="Stop container \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" with signal terminated" Jul 14 21:46:02.552409 systemd-networkd[1040]: lxc_health: Link DOWN Jul 14 21:46:02.552415 systemd-networkd[1040]: lxc_health: Lost carrier Jul 14 21:46:02.592521 systemd[1]: cri-containerd-e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f.scope: Deactivated successfully. Jul 14 21:46:02.592865 systemd[1]: cri-containerd-e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f.scope: Consumed 6.593s CPU time. Jul 14 21:46:02.609747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f-rootfs.mount: Deactivated successfully. 
Jul 14 21:46:02.793750 env[1210]: time="2025-07-14T21:46:02.793706641Z" level=info msg="shim disconnected" id=e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f Jul 14 21:46:02.794090 env[1210]: time="2025-07-14T21:46:02.794069979Z" level=warning msg="cleaning up after shim disconnected" id=e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f namespace=k8s.io Jul 14 21:46:02.794178 env[1210]: time="2025-07-14T21:46:02.794163445Z" level=info msg="cleaning up dead shim" Jul 14 21:46:02.801125 env[1210]: time="2025-07-14T21:46:02.801092680Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2921 runtime=io.containerd.runc.v2\n" Jul 14 21:46:02.803953 env[1210]: time="2025-07-14T21:46:02.803917325Z" level=info msg="StopContainer for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" returns successfully" Jul 14 21:46:02.804662 env[1210]: time="2025-07-14T21:46:02.804633159Z" level=info msg="StopPodSandbox for \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\"" Jul 14 21:46:02.804722 env[1210]: time="2025-07-14T21:46:02.804695615Z" level=info msg="Container to stop \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.804722 env[1210]: time="2025-07-14T21:46:02.804710339Z" level=info msg="Container to stop \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.804770 env[1210]: time="2025-07-14T21:46:02.804722623Z" level=info msg="Container to stop \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.804770 env[1210]: time="2025-07-14T21:46:02.804733946Z" level=info msg="Container to stop 
\"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.804770 env[1210]: time="2025-07-14T21:46:02.804744309Z" level=info msg="Container to stop \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:02.806383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f-shm.mount: Deactivated successfully. Jul 14 21:46:02.810519 systemd[1]: cri-containerd-ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f.scope: Deactivated successfully. Jul 14 21:46:02.830272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f-rootfs.mount: Deactivated successfully. Jul 14 21:46:02.836874 env[1210]: time="2025-07-14T21:46:02.836802786Z" level=info msg="shim disconnected" id=ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f Jul 14 21:46:02.836874 env[1210]: time="2025-07-14T21:46:02.836866883Z" level=warning msg="cleaning up after shim disconnected" id=ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f namespace=k8s.io Jul 14 21:46:02.836874 env[1210]: time="2025-07-14T21:46:02.836875925Z" level=info msg="cleaning up dead shim" Jul 14 21:46:02.846058 env[1210]: time="2025-07-14T21:46:02.846002676Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2951 runtime=io.containerd.runc.v2\n" Jul 14 21:46:02.846354 env[1210]: time="2025-07-14T21:46:02.846317761Z" level=info msg="TearDown network for sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" successfully" Jul 14 21:46:02.846354 env[1210]: time="2025-07-14T21:46:02.846345889Z" level=info msg="StopPodSandbox for 
\"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" returns successfully" Jul 14 21:46:02.957139 kubelet[1414]: I0714 21:46:02.957088 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hubble-tls\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957139 kubelet[1414]: I0714 21:46:02.957132 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-kernel\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957139 kubelet[1414]: I0714 21:46:02.957152 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-run\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957417 kubelet[1414]: I0714 21:46:02.957165 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-net\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957417 kubelet[1414]: I0714 21:46:02.957190 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hostproc\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957417 kubelet[1414]: I0714 21:46:02.957222 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-cgroup\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957417 kubelet[1414]: I0714 21:46:02.957239 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-lib-modules\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957417 kubelet[1414]: I0714 21:46:02.957253 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-bpf-maps\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957417 kubelet[1414]: I0714 21:46:02.957272 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-config-path\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957550 kubelet[1414]: I0714 21:46:02.957290 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-clustermesh-secrets\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957550 kubelet[1414]: I0714 21:46:02.957304 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cni-path\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957550 kubelet[1414]: I0714 21:46:02.957320 1414 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksnf6\" (UniqueName: \"kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-kube-api-access-ksnf6\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957550 kubelet[1414]: I0714 21:46:02.957334 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-etc-cni-netd\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957550 kubelet[1414]: I0714 21:46:02.957350 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-xtables-lock\") pod \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\" (UID: \"9775ce27-3a5f-457e-ba5a-0f31528fb8e2\") " Jul 14 21:46:02.957550 kubelet[1414]: I0714 21:46:02.957415 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958044 kubelet[1414]: I0714 21:46:02.957696 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958044 kubelet[1414]: I0714 21:46:02.957736 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958044 kubelet[1414]: I0714 21:46:02.957765 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958044 kubelet[1414]: I0714 21:46:02.957785 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958044 kubelet[1414]: I0714 21:46:02.957799 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958239 kubelet[1414]: I0714 21:46:02.957812 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958239 kubelet[1414]: I0714 21:46:02.957841 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.958239 kubelet[1414]: I0714 21:46:02.957855 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.959009 kubelet[1414]: I0714 21:46:02.958972 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:02.960108 kubelet[1414]: I0714 21:46:02.960069 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:46:02.960879 kubelet[1414]: I0714 21:46:02.960830 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:46:02.960947 kubelet[1414]: I0714 21:46:02.960934 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:02.961510 kubelet[1414]: I0714 21:46:02.961462 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-kube-api-access-ksnf6" (OuterVolumeSpecName: "kube-api-access-ksnf6") pod "9775ce27-3a5f-457e-ba5a-0f31528fb8e2" (UID: "9775ce27-3a5f-457e-ba5a-0f31528fb8e2"). InnerVolumeSpecName "kube-api-access-ksnf6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:03.057975 kubelet[1414]: I0714 21:46:03.057877 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-kernel\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.058116 kubelet[1414]: I0714 21:46:03.058102 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-run\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.058183 kubelet[1414]: I0714 21:46:03.058172 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-host-proc-sys-net\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.058256 kubelet[1414]: I0714 21:46:03.058245 1414 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hostproc\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.058438 kubelet[1414]: I0714 21:46:03.058419 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-cgroup\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.058897 kubelet[1414]: I0714 21:46:03.058751 1414 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-lib-modules\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.059131 kubelet[1414]: I0714 21:46:03.059115 1414 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-bpf-maps\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.059378 kubelet[1414]: I0714 21:46:03.059361 1414 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cilium-config-path\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.059588 kubelet[1414]: I0714 21:46:03.059556 1414 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-clustermesh-secrets\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.059801 kubelet[1414]: I0714 21:46:03.059749 1414 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-cni-path\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.059936 kubelet[1414]: I0714 21:46:03.059922 1414 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ksnf6\" (UniqueName: \"kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-kube-api-access-ksnf6\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.060000 kubelet[1414]: I0714 21:46:03.059990 1414 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-etc-cni-netd\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.060056 kubelet[1414]: I0714 21:46:03.060047 1414 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-xtables-lock\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.060117 kubelet[1414]: I0714 21:46:03.060107 1414 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9775ce27-3a5f-457e-ba5a-0f31528fb8e2-hubble-tls\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:03.431189 kubelet[1414]: E0714 21:46:03.431092 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 14 21:46:03.515224 systemd[1]: var-lib-kubelet-pods-9775ce27\x2d3a5f\x2d457e\x2dba5a\x2d0f31528fb8e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksnf6.mount: Deactivated successfully. Jul 14 21:46:03.515331 systemd[1]: var-lib-kubelet-pods-9775ce27\x2d3a5f\x2d457e\x2dba5a\x2d0f31528fb8e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 21:46:03.515389 systemd[1]: var-lib-kubelet-pods-9775ce27\x2d3a5f\x2d457e\x2dba5a\x2d0f31528fb8e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 21:46:03.618348 systemd[1]: Removed slice kubepods-burstable-pod9775ce27_3a5f_457e_ba5a_0f31528fb8e2.slice. Jul 14 21:46:03.618444 systemd[1]: kubepods-burstable-pod9775ce27_3a5f_457e_ba5a_0f31528fb8e2.slice: Consumed 6.790s CPU time. Jul 14 21:46:03.721351 kubelet[1414]: I0714 21:46:03.721240 1414 scope.go:117] "RemoveContainer" containerID="e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f" Jul 14 21:46:03.723613 env[1210]: time="2025-07-14T21:46:03.723561101Z" level=info msg="RemoveContainer for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\"" Jul 14 21:46:03.731117 env[1210]: time="2025-07-14T21:46:03.731058932Z" level=info msg="RemoveContainer for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" returns successfully" Jul 14 21:46:03.731509 kubelet[1414]: I0714 21:46:03.731477 1414 scope.go:117] "RemoveContainer" containerID="17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303" Jul 14 21:46:03.732697 env[1210]: time="2025-07-14T21:46:03.732664390Z" level=info msg="RemoveContainer for \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\"" Jul 14 21:46:03.739251 env[1210]: time="2025-07-14T21:46:03.739196330Z" level=info msg="RemoveContainer for \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\" returns successfully" Jul 14 21:46:03.739534 
kubelet[1414]: I0714 21:46:03.739491 1414 scope.go:117] "RemoveContainer" containerID="9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2" Jul 14 21:46:03.741181 env[1210]: time="2025-07-14T21:46:03.741148358Z" level=info msg="RemoveContainer for \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\"" Jul 14 21:46:03.747115 env[1210]: time="2025-07-14T21:46:03.747062497Z" level=info msg="RemoveContainer for \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\" returns successfully" Jul 14 21:46:03.747442 kubelet[1414]: I0714 21:46:03.747406 1414 scope.go:117] "RemoveContainer" containerID="7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4" Jul 14 21:46:03.748576 env[1210]: time="2025-07-14T21:46:03.748545163Z" level=info msg="RemoveContainer for \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\"" Jul 14 21:46:03.751042 env[1210]: time="2025-07-14T21:46:03.750994561Z" level=info msg="RemoveContainer for \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\" returns successfully" Jul 14 21:46:03.751241 kubelet[1414]: I0714 21:46:03.751213 1414 scope.go:117] "RemoveContainer" containerID="bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de" Jul 14 21:46:03.752339 env[1210]: time="2025-07-14T21:46:03.752305902Z" level=info msg="RemoveContainer for \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\"" Jul 14 21:46:03.754937 env[1210]: time="2025-07-14T21:46:03.754893536Z" level=info msg="RemoveContainer for \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\" returns successfully" Jul 14 21:46:03.755130 kubelet[1414]: I0714 21:46:03.755096 1414 scope.go:117] "RemoveContainer" containerID="e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f" Jul 14 21:46:03.755598 env[1210]: time="2025-07-14T21:46:03.755367939Z" level=error msg="ContainerStatus for \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\": not found" Jul 14 21:46:03.755726 kubelet[1414]: E0714 21:46:03.755701 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\": not found" containerID="e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f" Jul 14 21:46:03.755827 kubelet[1414]: I0714 21:46:03.755735 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f"} err="failed to get container status \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7a4ccbcf8b8a7224d2351f992ece08eb87745e9ce785dc9262ad6e60377f42f\": not found" Jul 14 21:46:03.755861 kubelet[1414]: I0714 21:46:03.755838 1414 scope.go:117] "RemoveContainer" containerID="17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303" Jul 14 21:46:03.756091 env[1210]: time="2025-07-14T21:46:03.756039354Z" level=error msg="ContainerStatus for \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\": not found" Jul 14 21:46:03.756252 kubelet[1414]: E0714 21:46:03.756227 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\": not found" containerID="17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303" Jul 14 21:46:03.756346 kubelet[1414]: I0714 21:46:03.756323 1414 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303"} err="failed to get container status \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\": rpc error: code = NotFound desc = an error occurred when try to find container \"17bf42f11cff42882e6bd76e2316d1dda4fe61729ebf6963862ef8f170fae303\": not found" Jul 14 21:46:03.756416 kubelet[1414]: I0714 21:46:03.756403 1414 scope.go:117] "RemoveContainer" containerID="9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2" Jul 14 21:46:03.756682 env[1210]: time="2025-07-14T21:46:03.756618585Z" level=error msg="ContainerStatus for \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\": not found" Jul 14 21:46:03.756765 kubelet[1414]: E0714 21:46:03.756744 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\": not found" containerID="9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2" Jul 14 21:46:03.756813 kubelet[1414]: I0714 21:46:03.756781 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2"} err="failed to get container status \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ed2880ffeb9fdc05eab82893cb4f89fde9a4c6d181c62215f799415ab6953d2\": not found" Jul 14 21:46:03.756813 kubelet[1414]: I0714 21:46:03.756801 1414 scope.go:117] "RemoveContainer" 
containerID="7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4" Jul 14 21:46:03.756972 env[1210]: time="2025-07-14T21:46:03.756936667Z" level=error msg="ContainerStatus for \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\": not found" Jul 14 21:46:03.757062 kubelet[1414]: E0714 21:46:03.757044 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\": not found" containerID="7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4" Jul 14 21:46:03.757102 kubelet[1414]: I0714 21:46:03.757068 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4"} err="failed to get container status \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ced66586cb8dc218d755b75b7c24f90238bb013bc48fe3278e9ef8c743a3ca4\": not found" Jul 14 21:46:03.757102 kubelet[1414]: I0714 21:46:03.757084 1414 scope.go:117] "RemoveContainer" containerID="bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de" Jul 14 21:46:03.757233 env[1210]: time="2025-07-14T21:46:03.757199256Z" level=error msg="ContainerStatus for \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\": not found" Jul 14 21:46:03.757306 kubelet[1414]: E0714 21:46:03.757290 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\": not found" containerID="bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de" Jul 14 21:46:03.757344 kubelet[1414]: I0714 21:46:03.757309 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de"} err="failed to get container status \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd2a3be1efcbbce1acb69b7841d03bcf717d2db3772a2367d0a4e258e9b8b8de\": not found" Jul 14 21:46:04.432245 kubelet[1414]: E0714 21:46:04.432202 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:05.433008 kubelet[1414]: E0714 21:46:05.432955 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:05.614722 kubelet[1414]: I0714 21:46:05.614672 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9775ce27-3a5f-457e-ba5a-0f31528fb8e2" path="/var/lib/kubelet/pods/9775ce27-3a5f-457e-ba5a-0f31528fb8e2/volumes" Jul 14 21:46:05.653668 kubelet[1414]: I0714 21:46:05.653633 1414 memory_manager.go:355] "RemoveStaleState removing state" podUID="9775ce27-3a5f-457e-ba5a-0f31528fb8e2" containerName="cilium-agent" Jul 14 21:46:05.658881 systemd[1]: Created slice kubepods-besteffort-podcef200c7_dcd9_447d_a016_5016f61aed44.slice. Jul 14 21:46:05.664843 systemd[1]: Created slice kubepods-burstable-pod2c22aa0b_d429_4ca8_b913_12b36feb09e6.slice. 
Jul 14 21:46:05.677142 kubelet[1414]: I0714 21:46:05.677086 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cni-path\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677142 kubelet[1414]: I0714 21:46:05.677131 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-config-path\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677309 kubelet[1414]: I0714 21:46:05.677157 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-ipsec-secrets\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677309 kubelet[1414]: I0714 21:46:05.677174 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cef200c7-dcd9-447d-a016-5016f61aed44-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dtjnb\" (UID: \"cef200c7-dcd9-447d-a016-5016f61aed44\") " pod="kube-system/cilium-operator-6c4d7847fc-dtjnb" Jul 14 21:46:05.677309 kubelet[1414]: I0714 21:46:05.677194 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqs97\" (UniqueName: \"kubernetes.io/projected/cef200c7-dcd9-447d-a016-5016f61aed44-kube-api-access-nqs97\") pod \"cilium-operator-6c4d7847fc-dtjnb\" (UID: \"cef200c7-dcd9-447d-a016-5016f61aed44\") " pod="kube-system/cilium-operator-6c4d7847fc-dtjnb" Jul 14 
21:46:05.677309 kubelet[1414]: I0714 21:46:05.677210 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-cgroup\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677309 kubelet[1414]: I0714 21:46:05.677227 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfx6n\" (UniqueName: \"kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-kube-api-access-hfx6n\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677431 kubelet[1414]: I0714 21:46:05.677245 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-lib-modules\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677431 kubelet[1414]: I0714 21:46:05.677265 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-xtables-lock\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677431 kubelet[1414]: I0714 21:46:05.677280 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hubble-tls\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677431 kubelet[1414]: I0714 21:46:05.677293 1414 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-etc-cni-netd\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677431 kubelet[1414]: I0714 21:46:05.677309 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-clustermesh-secrets\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677431 kubelet[1414]: I0714 21:46:05.677323 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-bpf-maps\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677559 kubelet[1414]: I0714 21:46:05.677342 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-kernel\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677559 kubelet[1414]: I0714 21:46:05.677367 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-run\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677559 kubelet[1414]: I0714 21:46:05.677382 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hostproc\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.677559 kubelet[1414]: I0714 21:46:05.677399 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-net\") pod \"cilium-5sptx\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " pod="kube-system/cilium-5sptx" Jul 14 21:46:05.833215 kubelet[1414]: E0714 21:46:05.833184 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:05.833704 env[1210]: time="2025-07-14T21:46:05.833666810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5sptx,Uid:2c22aa0b-d429-4ca8-b913-12b36feb09e6,Namespace:kube-system,Attempt:0,}" Jul 14 21:46:05.852675 env[1210]: time="2025-07-14T21:46:05.852570733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:46:05.852675 env[1210]: time="2025-07-14T21:46:05.852615024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:46:05.852675 env[1210]: time="2025-07-14T21:46:05.852625907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:46:05.853111 env[1210]: time="2025-07-14T21:46:05.852912976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29 pid=2982 runtime=io.containerd.runc.v2 Jul 14 21:46:05.863743 systemd[1]: Started cri-containerd-8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29.scope. Jul 14 21:46:05.912174 env[1210]: time="2025-07-14T21:46:05.912129071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5sptx,Uid:2c22aa0b-d429-4ca8-b913-12b36feb09e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\"" Jul 14 21:46:05.912951 kubelet[1414]: E0714 21:46:05.912899 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:05.914906 env[1210]: time="2025-07-14T21:46:05.914867692Z" level=info msg="CreateContainer within sandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:46:05.932490 env[1210]: time="2025-07-14T21:46:05.932417168Z" level=info msg="CreateContainer within sandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\"" Jul 14 21:46:05.933039 env[1210]: time="2025-07-14T21:46:05.933011231Z" level=info msg="StartContainer for \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\"" Jul 14 21:46:05.946802 systemd[1]: Started cri-containerd-57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc.scope. 
Jul 14 21:46:05.961947 kubelet[1414]: E0714 21:46:05.961802 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:05.962420 env[1210]: time="2025-07-14T21:46:05.962334150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dtjnb,Uid:cef200c7-dcd9-447d-a016-5016f61aed44,Namespace:kube-system,Attempt:0,}" Jul 14 21:46:05.968319 systemd[1]: cri-containerd-57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc.scope: Deactivated successfully. Jul 14 21:46:05.986533 env[1210]: time="2025-07-14T21:46:05.986477378Z" level=info msg="shim disconnected" id=57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc Jul 14 21:46:05.986533 env[1210]: time="2025-07-14T21:46:05.986530951Z" level=warning msg="cleaning up after shim disconnected" id=57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc namespace=k8s.io Jul 14 21:46:05.986533 env[1210]: time="2025-07-14T21:46:05.986541834Z" level=info msg="cleaning up dead shim" Jul 14 21:46:05.992639 env[1210]: time="2025-07-14T21:46:05.992564127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:46:05.992791 env[1210]: time="2025-07-14T21:46:05.992611579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:46:05.992791 env[1210]: time="2025-07-14T21:46:05.992622181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:46:05.992901 env[1210]: time="2025-07-14T21:46:05.992792102Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ced6fa5a23319fea2b61d9cbc11743be85d8d4d12615efd486154304ea60073 pid=3057 runtime=io.containerd.runc.v2 Jul 14 21:46:05.996456 env[1210]: time="2025-07-14T21:46:05.996404534Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3042 runtime=io.containerd.runc.v2\ntime=\"2025-07-14T21:46:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 14 21:46:05.996764 env[1210]: time="2025-07-14T21:46:05.996664637Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Jul 14 21:46:05.997956 env[1210]: time="2025-07-14T21:46:05.997905457Z" level=error msg="Failed to pipe stderr of container \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\"" error="reading from a closed fifo" Jul 14 21:46:05.998224 env[1210]: time="2025-07-14T21:46:05.998182764Z" level=error msg="Failed to pipe stdout of container \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\"" error="reading from a closed fifo" Jul 14 21:46:06.002339 env[1210]: time="2025-07-14T21:46:06.002266146Z" level=error msg="StartContainer for \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 14 21:46:06.003029 kubelet[1414]: E0714 21:46:06.002976 1414 log.go:32] "StartContainer from 
runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc" Jul 14 21:46:06.003451 kubelet[1414]: E0714 21:46:06.003413 1414 kuberuntime_manager.go:1341] "Unhandled Error" err=< Jul 14 21:46:06.003451 kubelet[1414]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 14 21:46:06.003451 kubelet[1414]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 14 21:46:06.003451 kubelet[1414]: rm /hostbin/cilium-mount Jul 14 21:46:06.003569 kubelet[1414]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hfx6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5sptx_kube-system(2c22aa0b-d429-4ca8-b913-12b36feb09e6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 14 21:46:06.003569 kubelet[1414]: > logger="UnhandledError" Jul 14 21:46:06.004807 kubelet[1414]: E0714 21:46:06.004760 1414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5sptx" podUID="2c22aa0b-d429-4ca8-b913-12b36feb09e6" Jul 14 21:46:06.010109 systemd[1]: Started cri-containerd-8ced6fa5a23319fea2b61d9cbc11743be85d8d4d12615efd486154304ea60073.scope. 
Jul 14 21:46:06.052050 env[1210]: time="2025-07-14T21:46:06.052007447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dtjnb,Uid:cef200c7-dcd9-447d-a016-5016f61aed44,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ced6fa5a23319fea2b61d9cbc11743be85d8d4d12615efd486154304ea60073\"" Jul 14 21:46:06.052964 kubelet[1414]: E0714 21:46:06.052935 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:06.053888 env[1210]: time="2025-07-14T21:46:06.053861319Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 21:46:06.433516 kubelet[1414]: E0714 21:46:06.433449 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:06.559759 kubelet[1414]: E0714 21:46:06.559695 1414 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 21:46:06.732615 env[1210]: time="2025-07-14T21:46:06.732507692Z" level=info msg="StopPodSandbox for \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\"" Jul 14 21:46:06.732813 env[1210]: time="2025-07-14T21:46:06.732788717Z" level=info msg="Container to stop \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:46:06.739803 systemd[1]: cri-containerd-8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29.scope: Deactivated successfully. 
Jul 14 21:46:06.773808 env[1210]: time="2025-07-14T21:46:06.773739932Z" level=info msg="shim disconnected" id=8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29 Jul 14 21:46:06.773808 env[1210]: time="2025-07-14T21:46:06.773783782Z" level=warning msg="cleaning up after shim disconnected" id=8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29 namespace=k8s.io Jul 14 21:46:06.773808 env[1210]: time="2025-07-14T21:46:06.773806068Z" level=info msg="cleaning up dead shim" Jul 14 21:46:06.784813 env[1210]: time="2025-07-14T21:46:06.784509720Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3113 runtime=io.containerd.runc.v2\n" Jul 14 21:46:06.785142 env[1210]: time="2025-07-14T21:46:06.785095016Z" level=info msg="TearDown network for sandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" successfully" Jul 14 21:46:06.785142 env[1210]: time="2025-07-14T21:46:06.785135425Z" level=info msg="StopPodSandbox for \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" returns successfully" Jul 14 21:46:06.885628 kubelet[1414]: I0714 21:46:06.885577 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hubble-tls\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885628 kubelet[1414]: I0714 21:46:06.885622 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-clustermesh-secrets\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885628 kubelet[1414]: I0714 21:46:06.885640 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-bpf-maps\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885659 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-ipsec-secrets\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885677 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfx6n\" (UniqueName: \"kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-kube-api-access-hfx6n\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885691 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-xtables-lock\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885712 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-run\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885730 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-net\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 
21:46:06.885760 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-kernel\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885776 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hostproc\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885794 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-lib-modules\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885812 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-config-path\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.885844 kubelet[1414]: I0714 21:46:06.885844 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-cgroup\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.886147 kubelet[1414]: I0714 21:46:06.885864 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cni-path\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" 
(UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.886147 kubelet[1414]: I0714 21:46:06.885879 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-etc-cni-netd\") pod \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\" (UID: \"2c22aa0b-d429-4ca8-b913-12b36feb09e6\") " Jul 14 21:46:06.886147 kubelet[1414]: I0714 21:46:06.885951 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887350 kubelet[1414]: I0714 21:46:06.886262 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887350 kubelet[1414]: I0714 21:46:06.886344 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887350 kubelet[1414]: I0714 21:46:06.886382 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887350 kubelet[1414]: I0714 21:46:06.887172 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887350 kubelet[1414]: I0714 21:46:06.887214 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887350 kubelet[1414]: I0714 21:46:06.887233 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887692 kubelet[1414]: I0714 21:46:06.887629 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887692 kubelet[1414]: I0714 21:46:06.887662 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.887692 kubelet[1414]: I0714 21:46:06.887680 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:46:06.889544 kubelet[1414]: I0714 21:46:06.889490 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-kube-api-access-hfx6n" (OuterVolumeSpecName: "kube-api-access-hfx6n") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "kube-api-access-hfx6n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:06.889727 kubelet[1414]: I0714 21:46:06.889674 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:46:06.890592 kubelet[1414]: I0714 21:46:06.890225 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:46:06.891029 kubelet[1414]: I0714 21:46:06.891004 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:46:06.891211 systemd[1]: var-lib-kubelet-pods-2c22aa0b\x2dd429\x2d4ca8\x2db913\x2d12b36feb09e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhfx6n.mount: Deactivated successfully. Jul 14 21:46:06.891307 systemd[1]: var-lib-kubelet-pods-2c22aa0b\x2dd429\x2d4ca8\x2db913\x2d12b36feb09e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 21:46:06.891360 systemd[1]: var-lib-kubelet-pods-2c22aa0b\x2dd429\x2d4ca8\x2db913\x2d12b36feb09e6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 14 21:46:06.892586 kubelet[1414]: I0714 21:46:06.892549 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c22aa0b-d429-4ca8-b913-12b36feb09e6" (UID: "2c22aa0b-d429-4ca8-b913-12b36feb09e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:46:06.893502 systemd[1]: var-lib-kubelet-pods-2c22aa0b\x2dd429\x2d4ca8\x2db913\x2d12b36feb09e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986563 1414 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hostproc\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986595 1414 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-lib-modules\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986607 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-kernel\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986615 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-config-path\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986625 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-cgroup\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 
kubelet[1414]: I0714 21:46:06.986634 1414 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cni-path\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986642 1414 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-etc-cni-netd\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986651 1414 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-hubble-tls\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986659 1414 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-clustermesh-secrets\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986667 1414 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-bpf-maps\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986675 1414 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfx6n\" (UniqueName: \"kubernetes.io/projected/2c22aa0b-d429-4ca8-b913-12b36feb09e6-kube-api-access-hfx6n\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986683 1414 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-xtables-lock\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.986680 kubelet[1414]: I0714 21:46:06.986691 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-run\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.987566 kubelet[1414]: I0714 21:46:06.986700 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c22aa0b-d429-4ca8-b913-12b36feb09e6-host-proc-sys-net\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:06.987566 kubelet[1414]: I0714 21:46:06.986708 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c22aa0b-d429-4ca8-b913-12b36feb09e6-cilium-ipsec-secrets\") on node \"10.0.0.15\" DevicePath \"\"" Jul 14 21:46:07.075058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395528840.mount: Deactivated successfully. Jul 14 21:46:07.434312 kubelet[1414]: E0714 21:46:07.434253 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:07.617461 systemd[1]: Removed slice kubepods-burstable-pod2c22aa0b_d429_4ca8_b913_12b36feb09e6.slice. 
Jul 14 21:46:07.735580 kubelet[1414]: I0714 21:46:07.735492 1414 scope.go:117] "RemoveContainer" containerID="57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc" Jul 14 21:46:07.737398 env[1210]: time="2025-07-14T21:46:07.737018873Z" level=info msg="RemoveContainer for \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\"" Jul 14 21:46:07.740939 env[1210]: time="2025-07-14T21:46:07.740907267Z" level=info msg="RemoveContainer for \"57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc\" returns successfully" Jul 14 21:46:07.770128 env[1210]: time="2025-07-14T21:46:07.770064701Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:46:07.772258 env[1210]: time="2025-07-14T21:46:07.772227548Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:46:07.774128 env[1210]: time="2025-07-14T21:46:07.774098408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:46:07.774581 env[1210]: time="2025-07-14T21:46:07.774542868Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 21:46:07.776547 env[1210]: time="2025-07-14T21:46:07.776519072Z" level=info msg="CreateContainer within sandbox \"8ced6fa5a23319fea2b61d9cbc11743be85d8d4d12615efd486154304ea60073\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 21:46:07.777850 kubelet[1414]: I0714 21:46:07.777400 1414 memory_manager.go:355] "RemoveStaleState removing state" podUID="2c22aa0b-d429-4ca8-b913-12b36feb09e6" containerName="mount-cgroup" Jul 14 21:46:07.782271 systemd[1]: Created slice kubepods-burstable-pod1cb5259f_39d6_4418_bfcb_a4bc929ab902.slice. Jul 14 21:46:07.792294 kubelet[1414]: I0714 21:46:07.792184 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-bpf-maps\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792294 kubelet[1414]: I0714 21:46:07.792226 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-hostproc\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792294 kubelet[1414]: I0714 21:46:07.792242 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-cni-path\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792294 kubelet[1414]: I0714 21:46:07.792259 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-lib-modules\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792294 kubelet[1414]: I0714 21:46:07.792279 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1cb5259f-39d6-4418-bfcb-a4bc929ab902-clustermesh-secrets\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792294 kubelet[1414]: I0714 21:46:07.792296 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb5259f-39d6-4418-bfcb-a4bc929ab902-cilium-config-path\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792315 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1cb5259f-39d6-4418-bfcb-a4bc929ab902-cilium-ipsec-secrets\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792331 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-host-proc-sys-kernel\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792351 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1cb5259f-39d6-4418-bfcb-a4bc929ab902-hubble-tls\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792369 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhgs\" (UniqueName: 
\"kubernetes.io/projected/1cb5259f-39d6-4418-bfcb-a4bc929ab902-kube-api-access-jfhgs\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792387 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-etc-cni-netd\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792402 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-host-proc-sys-net\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792419 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-cilium-run\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792432 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-cilium-cgroup\") pod \"cilium-qhrxn\" (UID: \"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.792553 kubelet[1414]: I0714 21:46:07.792447 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cb5259f-39d6-4418-bfcb-a4bc929ab902-xtables-lock\") pod \"cilium-qhrxn\" (UID: 
\"1cb5259f-39d6-4418-bfcb-a4bc929ab902\") " pod="kube-system/cilium-qhrxn" Jul 14 21:46:07.798620 env[1210]: time="2025-07-14T21:46:07.798561628Z" level=info msg="CreateContainer within sandbox \"8ced6fa5a23319fea2b61d9cbc11743be85d8d4d12615efd486154304ea60073\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6fe49457ca1365e8f64822436e53bf646467e65a912187307a36cc34d5a928da\"" Jul 14 21:46:07.799136 env[1210]: time="2025-07-14T21:46:07.799102549Z" level=info msg="StartContainer for \"6fe49457ca1365e8f64822436e53bf646467e65a912187307a36cc34d5a928da\"" Jul 14 21:46:07.821100 systemd[1]: run-containerd-runc-k8s.io-6fe49457ca1365e8f64822436e53bf646467e65a912187307a36cc34d5a928da-runc.yn6OCv.mount: Deactivated successfully. Jul 14 21:46:07.822566 systemd[1]: Started cri-containerd-6fe49457ca1365e8f64822436e53bf646467e65a912187307a36cc34d5a928da.scope. Jul 14 21:46:07.853903 env[1210]: time="2025-07-14T21:46:07.853777361Z" level=info msg="StartContainer for \"6fe49457ca1365e8f64822436e53bf646467e65a912187307a36cc34d5a928da\" returns successfully" Jul 14 21:46:08.098028 kubelet[1414]: E0714 21:46:08.097996 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.098881 env[1210]: time="2025-07-14T21:46:08.098815630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhrxn,Uid:1cb5259f-39d6-4418-bfcb-a4bc929ab902,Namespace:kube-system,Attempt:0,}" Jul 14 21:46:08.206577 env[1210]: time="2025-07-14T21:46:08.206511070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:46:08.207789 env[1210]: time="2025-07-14T21:46:08.206551079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:46:08.207789 env[1210]: time="2025-07-14T21:46:08.206562921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:46:08.207789 env[1210]: time="2025-07-14T21:46:08.206928841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc pid=3180 runtime=io.containerd.runc.v2 Jul 14 21:46:08.217672 systemd[1]: Started cri-containerd-d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc.scope. Jul 14 21:46:08.264735 env[1210]: time="2025-07-14T21:46:08.264678749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhrxn,Uid:1cb5259f-39d6-4418-bfcb-a4bc929ab902,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\"" Jul 14 21:46:08.265798 kubelet[1414]: E0714 21:46:08.265770 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.268160 env[1210]: time="2025-07-14T21:46:08.268118256Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:46:08.289722 env[1210]: time="2025-07-14T21:46:08.289641853Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450\"" Jul 14 21:46:08.290237 env[1210]: time="2025-07-14T21:46:08.290185331Z" level=info msg="StartContainer for \"8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450\"" Jul 14 
21:46:08.304662 systemd[1]: Started cri-containerd-8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450.scope. Jul 14 21:46:08.333838 env[1210]: time="2025-07-14T21:46:08.333780403Z" level=info msg="StartContainer for \"8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450\" returns successfully" Jul 14 21:46:08.367112 systemd[1]: cri-containerd-8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450.scope: Deactivated successfully. Jul 14 21:46:08.390427 env[1210]: time="2025-07-14T21:46:08.390380582Z" level=info msg="shim disconnected" id=8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450 Jul 14 21:46:08.390656 env[1210]: time="2025-07-14T21:46:08.390635597Z" level=warning msg="cleaning up after shim disconnected" id=8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450 namespace=k8s.io Jul 14 21:46:08.390724 env[1210]: time="2025-07-14T21:46:08.390710533Z" level=info msg="cleaning up dead shim" Jul 14 21:46:08.398151 env[1210]: time="2025-07-14T21:46:08.398056409Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3262 runtime=io.containerd.runc.v2\n" Jul 14 21:46:08.434514 kubelet[1414]: E0714 21:46:08.434446 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 21:46:08.739413 kubelet[1414]: E0714 21:46:08.739009 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.741615 kubelet[1414]: E0714 21:46:08.741591 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:46:08.743524 env[1210]: time="2025-07-14T21:46:08.743485825Z" level=info msg="CreateContainer within sandbox 
\"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 21:46:08.756740 kubelet[1414]: I0714 21:46:08.756648 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dtjnb" podStartSLOduration=2.034827879 podStartE2EDuration="3.756527779s" podCreationTimestamp="2025-07-14 21:46:05 +0000 UTC" firstStartedPulling="2025-07-14 21:46:06.053615902 +0000 UTC m=+55.374839721" lastFinishedPulling="2025-07-14 21:46:07.775315802 +0000 UTC m=+57.096539621" observedRunningTime="2025-07-14 21:46:08.755452985 +0000 UTC m=+58.076676804" watchObservedRunningTime="2025-07-14 21:46:08.756527779 +0000 UTC m=+58.077751598" Jul 14 21:46:08.768443 env[1210]: time="2025-07-14T21:46:08.768376913Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629\"" Jul 14 21:46:08.769039 env[1210]: time="2025-07-14T21:46:08.769009131Z" level=info msg="StartContainer for \"c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629\"" Jul 14 21:46:08.785631 systemd[1]: Started cri-containerd-c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629.scope. Jul 14 21:46:08.838585 env[1210]: time="2025-07-14T21:46:08.838505671Z" level=info msg="StartContainer for \"c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629\" returns successfully" Jul 14 21:46:08.842626 systemd[1]: cri-containerd-c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629.scope: Deactivated successfully. Jul 14 21:46:08.862745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629-rootfs.mount: Deactivated successfully. 
Jul 14 21:46:08.868791 env[1210]: time="2025-07-14T21:46:08.868742921Z" level=info msg="shim disconnected" id=c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629
Jul 14 21:46:08.868791 env[1210]: time="2025-07-14T21:46:08.868785130Z" level=warning msg="cleaning up after shim disconnected" id=c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629 namespace=k8s.io
Jul 14 21:46:08.868791 env[1210]: time="2025-07-14T21:46:08.868793892Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:08.875427 env[1210]: time="2025-07-14T21:46:08.875352077Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3326 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:09.090980 kubelet[1414]: W0714 21:46:09.090909 1414 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c22aa0b_d429_4ca8_b913_12b36feb09e6.slice/cri-containerd-57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc.scope WatchSource:0}: container "57e2a161bcd7c10aaa1e1145dec632030791fc1dd851c44e9d68eb00977972bc" in namespace "k8s.io": not found
Jul 14 21:46:09.435226 kubelet[1414]: E0714 21:46:09.434999 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:09.615572 kubelet[1414]: I0714 21:46:09.615522 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c22aa0b-d429-4ca8-b913-12b36feb09e6" path="/var/lib/kubelet/pods/2c22aa0b-d429-4ca8-b913-12b36feb09e6/volumes"
Jul 14 21:46:09.745502 kubelet[1414]: E0714 21:46:09.745247 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:09.746107 kubelet[1414]: E0714 21:46:09.745855 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:09.747799 env[1210]: time="2025-07-14T21:46:09.747746963Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:46:09.761096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323254917.mount: Deactivated successfully.
Jul 14 21:46:09.764808 env[1210]: time="2025-07-14T21:46:09.764746017Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a\""
Jul 14 21:46:09.765525 env[1210]: time="2025-07-14T21:46:09.765484292Z" level=info msg="StartContainer for \"6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a\""
Jul 14 21:46:09.785896 systemd[1]: Started cri-containerd-6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a.scope.
Jul 14 21:46:09.825916 systemd[1]: cri-containerd-6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a.scope: Deactivated successfully.
Jul 14 21:46:09.827846 env[1210]: time="2025-07-14T21:46:09.827793671Z" level=info msg="StartContainer for \"6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a\" returns successfully"
Jul 14 21:46:09.846452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a-rootfs.mount: Deactivated successfully.
Jul 14 21:46:09.866655 env[1210]: time="2025-07-14T21:46:09.866581305Z" level=info msg="shim disconnected" id=6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a
Jul 14 21:46:09.866655 env[1210]: time="2025-07-14T21:46:09.866624834Z" level=warning msg="cleaning up after shim disconnected" id=6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a namespace=k8s.io
Jul 14 21:46:09.866655 env[1210]: time="2025-07-14T21:46:09.866634156Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:09.872741 env[1210]: time="2025-07-14T21:46:09.872691950Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3382 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:10.436770 kubelet[1414]: E0714 21:46:10.436720 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:10.749246 kubelet[1414]: E0714 21:46:10.749016 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:10.751186 env[1210]: time="2025-07-14T21:46:10.751137614Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:46:10.813658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519570373.mount: Deactivated successfully.
Jul 14 21:46:10.820192 env[1210]: time="2025-07-14T21:46:10.820142984Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7\""
Jul 14 21:46:10.820903 env[1210]: time="2025-07-14T21:46:10.820872333Z" level=info msg="StartContainer for \"30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7\""
Jul 14 21:46:10.864211 systemd[1]: Started cri-containerd-30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7.scope.
Jul 14 21:46:10.888761 systemd[1]: cri-containerd-30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7.scope: Deactivated successfully.
Jul 14 21:46:10.893292 env[1210]: time="2025-07-14T21:46:10.893248789Z" level=info msg="StartContainer for \"30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7\" returns successfully"
Jul 14 21:46:10.930592 env[1210]: time="2025-07-14T21:46:10.930539142Z" level=info msg="shim disconnected" id=30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7
Jul 14 21:46:10.930812 env[1210]: time="2025-07-14T21:46:10.930607476Z" level=warning msg="cleaning up after shim disconnected" id=30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7 namespace=k8s.io
Jul 14 21:46:10.930812 env[1210]: time="2025-07-14T21:46:10.930618838Z" level=info msg="cleaning up dead shim"
Jul 14 21:46:10.946459 env[1210]: time="2025-07-14T21:46:10.946401732Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:46:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3437 runtime=io.containerd.runc.v2\n"
Jul 14 21:46:11.394906 kubelet[1414]: E0714 21:46:11.394857 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:11.423522 env[1210]: time="2025-07-14T21:46:11.423483413Z" level=info msg="StopPodSandbox for \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\""
Jul 14 21:46:11.423834 env[1210]: time="2025-07-14T21:46:11.423775071Z" level=info msg="TearDown network for sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" successfully"
Jul 14 21:46:11.423929 env[1210]: time="2025-07-14T21:46:11.423910098Z" level=info msg="StopPodSandbox for \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" returns successfully"
Jul 14 21:46:11.424366 env[1210]: time="2025-07-14T21:46:11.424333701Z" level=info msg="RemovePodSandbox for \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\""
Jul 14 21:46:11.424463 env[1210]: time="2025-07-14T21:46:11.424362067Z" level=info msg="Forcibly stopping sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\""
Jul 14 21:46:11.424463 env[1210]: time="2025-07-14T21:46:11.424423679Z" level=info msg="TearDown network for sandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" successfully"
Jul 14 21:46:11.427974 env[1210]: time="2025-07-14T21:46:11.427922490Z" level=info msg="RemovePodSandbox \"ce0bc1991324c326efe57c9c404da4bb264ea1384dcd7e174d42c66715453a8f\" returns successfully"
Jul 14 21:46:11.428415 env[1210]: time="2025-07-14T21:46:11.428378260Z" level=info msg="StopPodSandbox for \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\""
Jul 14 21:46:11.428481 env[1210]: time="2025-07-14T21:46:11.428453074Z" level=info msg="TearDown network for sandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" successfully"
Jul 14 21:46:11.428514 env[1210]: time="2025-07-14T21:46:11.428480640Z" level=info msg="StopPodSandbox for \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" returns successfully"
Jul 14 21:46:11.430168 env[1210]: time="2025-07-14T21:46:11.428893801Z" level=info msg="RemovePodSandbox for \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\""
Jul 14 21:46:11.430168 env[1210]: time="2025-07-14T21:46:11.428924608Z" level=info msg="Forcibly stopping sandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\""
Jul 14 21:46:11.430168 env[1210]: time="2025-07-14T21:46:11.429007424Z" level=info msg="TearDown network for sandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" successfully"
Jul 14 21:46:11.433802 env[1210]: time="2025-07-14T21:46:11.431664628Z" level=info msg="RemovePodSandbox \"8bb8725ee77d19dcf477e6ddd39be5ec8387f4a4dda972efb373f4e160769f29\" returns successfully"
Jul 14 21:46:11.436955 kubelet[1414]: E0714 21:46:11.436870 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:11.560439 kubelet[1414]: E0714 21:46:11.560386 1414 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 21:46:11.757556 kubelet[1414]: E0714 21:46:11.757459 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:11.764325 env[1210]: time="2025-07-14T21:46:11.762796237Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:46:11.778037 env[1210]: time="2025-07-14T21:46:11.777732266Z" level=info msg="CreateContainer within sandbox \"d9e9762ac0ea167bc7390bbda1b314edaef7335f839f533a5b3ae862272a0cdc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0\""
Jul 14 21:46:11.778383 env[1210]: time="2025-07-14T21:46:11.778357069Z" level=info msg="StartContainer for \"e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0\""
Jul 14 21:46:11.798595 systemd[1]: Started cri-containerd-e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0.scope.
Jul 14 21:46:11.811255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7-rootfs.mount: Deactivated successfully.
Jul 14 21:46:11.841853 env[1210]: time="2025-07-14T21:46:11.837921307Z" level=info msg="StartContainer for \"e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0\" returns successfully"
Jul 14 21:46:11.858895 systemd[1]: run-containerd-runc-k8s.io-e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0-runc.MH6Gz5.mount: Deactivated successfully.
Jul 14 21:46:12.105857 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 14 21:46:12.208397 kubelet[1414]: W0714 21:46:12.208357 1414 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb5259f_39d6_4418_bfcb_a4bc929ab902.slice/cri-containerd-8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450.scope WatchSource:0}: task 8842c27749631fb58bfa1621b9ec695d8fa3428b6b4207a953adbbf6a6544450 not found: not found
Jul 14 21:46:12.437989 kubelet[1414]: E0714 21:46:12.437869 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:12.762144 kubelet[1414]: E0714 21:46:12.761765 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:12.767188 kubelet[1414]: I0714 21:46:12.767150 1414 setters.go:602] "Node became not ready" node="10.0.0.15" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T21:46:12Z","lastTransitionTime":"2025-07-14T21:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 14 21:46:12.787995 kubelet[1414]: I0714 21:46:12.787917 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qhrxn" podStartSLOduration=5.787893345 podStartE2EDuration="5.787893345s" podCreationTimestamp="2025-07-14 21:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:46:12.786810377 +0000 UTC m=+62.108034196" watchObservedRunningTime="2025-07-14 21:46:12.787893345 +0000 UTC m=+62.109117164"
Jul 14 21:46:13.438232 kubelet[1414]: E0714 21:46:13.438184 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:14.099309 kubelet[1414]: E0714 21:46:14.099265 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:14.207466 systemd[1]: run-containerd-runc-k8s.io-e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0-runc.w1r7PC.mount: Deactivated successfully.
Jul 14 21:46:14.438942 kubelet[1414]: E0714 21:46:14.438830 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:14.938127 systemd-networkd[1040]: lxc_health: Link UP
Jul 14 21:46:14.948474 systemd-networkd[1040]: lxc_health: Gained carrier
Jul 14 21:46:14.948952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 14 21:46:15.317957 kubelet[1414]: W0714 21:46:15.317757 1414 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb5259f_39d6_4418_bfcb_a4bc929ab902.slice/cri-containerd-c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629.scope WatchSource:0}: task c8df2d78083d964e50b86182b5de9388cdd4d770127aa670c1c8bdf20d7b3629 not found: not found
Jul 14 21:46:15.439857 kubelet[1414]: E0714 21:46:15.439783 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:16.100112 kubelet[1414]: E0714 21:46:16.100066 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:16.201955 systemd-networkd[1040]: lxc_health: Gained IPv6LL
Jul 14 21:46:16.440381 kubelet[1414]: E0714 21:46:16.440278 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:16.767890 kubelet[1414]: E0714 21:46:16.767767 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:17.441395 kubelet[1414]: E0714 21:46:17.441346 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:17.769323 kubelet[1414]: E0714 21:46:17.769204 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:18.427002 kubelet[1414]: W0714 21:46:18.426964 1414 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb5259f_39d6_4418_bfcb_a4bc929ab902.slice/cri-containerd-6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a.scope WatchSource:0}: task 6951f0eec8486f61a3c722e15d15ef93bcb55a3930fdc29e59affbe439734a0a not found: not found
Jul 14 21:46:18.442186 kubelet[1414]: E0714 21:46:18.442133 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:18.485126 systemd[1]: run-containerd-runc-k8s.io-e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0-runc.OSQCor.mount: Deactivated successfully.
Jul 14 21:46:19.443020 kubelet[1414]: E0714 21:46:19.442963 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:20.444053 kubelet[1414]: E0714 21:46:20.444009 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:21.444462 kubelet[1414]: E0714 21:46:21.444420 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:21.536476 kubelet[1414]: W0714 21:46:21.536432 1414 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb5259f_39d6_4418_bfcb_a4bc929ab902.slice/cri-containerd-30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7.scope WatchSource:0}: task 30a98b8991f8139c935318e3a3671bba2cbc404272b44537890643e9e1d52dc7 not found: not found
Jul 14 21:46:22.445529 kubelet[1414]: E0714 21:46:22.445481 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:22.729386 systemd[1]: run-containerd-runc-k8s.io-e4684521d74acb6b9f0013d15332a057af24414c053cc83f6f14f9a25b9931e0-runc.gtm8sn.mount: Deactivated successfully.
Jul 14 21:46:23.446454 kubelet[1414]: E0714 21:46:23.446418 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 14 21:46:24.447485 kubelet[1414]: E0714 21:46:24.447435 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"