Jul 10 00:34:28.764584 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 10 00:34:28.764605 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed Jul 9 23:19:15 -00 2025 Jul 10 00:34:28.764613 kernel: efi: EFI v2.70 by EDK II Jul 10 00:34:28.764619 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 10 00:34:28.764624 kernel: random: crng init done Jul 10 00:34:28.764629 kernel: ACPI: Early table checksum verification disabled Jul 10 00:34:28.764636 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 10 00:34:28.764643 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:34:28.764649 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764654 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764660 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764665 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764671 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764677 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764685 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764691 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764697 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:34:28.764703 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 10 00:34:28.764709 kernel: NUMA: Failed to initialise from firmware Jul 10 00:34:28.764715 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:34:28.764732 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] Jul 10 00:34:28.764737 kernel: Zone ranges: Jul 10 00:34:28.764743 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:34:28.764750 kernel: DMA32 empty Jul 10 00:34:28.764756 kernel: Normal empty Jul 10 00:34:28.764762 kernel: Movable zone start for each node Jul 10 00:34:28.764768 kernel: Early memory node ranges Jul 10 00:34:28.764773 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 10 00:34:28.764779 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 10 00:34:28.764785 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 10 00:34:28.764790 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 10 00:34:28.764796 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 10 00:34:28.764802 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 10 00:34:28.764807 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 10 00:34:28.764813 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:34:28.764820 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 10 00:34:28.764826 kernel: psci: probing for conduit method from ACPI. Jul 10 00:34:28.764832 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 10 00:34:28.764837 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 00:34:28.764843 kernel: psci: Trusted OS migration not required Jul 10 00:34:28.764851 kernel: psci: SMC Calling Convention v1.1 Jul 10 00:34:28.764858 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 10 00:34:28.764865 kernel: ACPI: SRAT not present Jul 10 00:34:28.764871 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 10 00:34:28.764877 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 10 00:34:28.764884 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 10 00:34:28.764890 kernel: Detected PIPT I-cache on CPU0 Jul 10 00:34:28.764896 kernel: CPU features: detected: GIC system register CPU interface Jul 10 00:34:28.764902 kernel: CPU features: detected: Hardware dirty bit management Jul 10 00:34:28.764909 kernel: CPU features: detected: Spectre-v4 Jul 10 00:34:28.764915 kernel: CPU features: detected: Spectre-BHB Jul 10 00:34:28.764922 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 10 00:34:28.764928 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 10 00:34:28.764935 kernel: CPU features: detected: ARM erratum 1418040 Jul 10 00:34:28.764941 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 10 00:34:28.764947 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 10 00:34:28.764953 kernel: Policy zone: DMA Jul 10 00:34:28.764960 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:34:28.764966 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:34:28.764973 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:34:28.764979 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:34:28.764986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:34:28.764994 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved) Jul 10 00:34:28.765000 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:34:28.765006 kernel: trace event string verifier disabled Jul 10 00:34:28.765012 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:34:28.765019 kernel: rcu: RCU event tracing is enabled. Jul 10 00:34:28.765037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:34:28.765045 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:34:28.765052 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:34:28.765058 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 10 00:34:28.765068 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:34:28.765074 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 00:34:28.765082 kernel: GICv3: 256 SPIs implemented Jul 10 00:34:28.765088 kernel: GICv3: 0 Extended SPIs implemented Jul 10 00:34:28.765095 kernel: GICv3: Distributor has no Range Selector support Jul 10 00:34:28.765101 kernel: Root IRQ handler: gic_handle_irq Jul 10 00:34:28.765116 kernel: GICv3: 16 PPIs implemented Jul 10 00:34:28.765122 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 10 00:34:28.765128 kernel: ACPI: SRAT not present Jul 10 00:34:28.765134 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 10 00:34:28.765141 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 10 00:34:28.765147 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 10 00:34:28.765153 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 10 00:34:28.765159 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 10 00:34:28.765167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:34:28.765173 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 10 00:34:28.765180 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 10 00:34:28.765186 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 10 00:34:28.765192 kernel: arm-pv: using stolen time PV Jul 10 00:34:28.765199 kernel: Console: colour dummy device 80x25 Jul 10 00:34:28.765205 kernel: ACPI: Core revision 20210730 Jul 10 00:34:28.765212 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 10 00:34:28.765218 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:34:28.765224 kernel: LSM: Security Framework initializing Jul 10 00:34:28.765232 kernel: SELinux: Initializing. Jul 10 00:34:28.765238 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:34:28.765245 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:34:28.765251 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:34:28.765257 kernel: Platform MSI: ITS@0x8080000 domain created Jul 10 00:34:28.765263 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 10 00:34:28.765269 kernel: Remapping and enabling EFI services. Jul 10 00:34:28.765276 kernel: smp: Bringing up secondary CPUs ... 
Jul 10 00:34:28.765282 kernel: Detected PIPT I-cache on CPU1 Jul 10 00:34:28.765289 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 10 00:34:28.765296 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 10 00:34:28.765302 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:34:28.765308 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 10 00:34:28.765314 kernel: Detected PIPT I-cache on CPU2 Jul 10 00:34:28.765321 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 10 00:34:28.765327 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 10 00:34:28.765334 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:34:28.765340 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 10 00:34:28.765346 kernel: Detected PIPT I-cache on CPU3 Jul 10 00:34:28.765354 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 10 00:34:28.765361 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 10 00:34:28.765367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:34:28.765373 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 10 00:34:28.765384 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:34:28.765392 kernel: SMP: Total of 4 processors activated. Jul 10 00:34:28.765398 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 00:34:28.765405 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 10 00:34:28.765412 kernel: CPU features: detected: Common not Private translations Jul 10 00:34:28.765418 kernel: CPU features: detected: CRC32 instructions Jul 10 00:34:28.765424 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 10 00:34:28.765431 kernel: CPU features: detected: LSE atomic instructions Jul 10 00:34:28.765439 kernel: CPU features: detected: Privileged Access Never Jul 10 00:34:28.765445 kernel: CPU features: detected: RAS Extension Support Jul 10 00:34:28.765452 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 10 00:34:28.765459 kernel: CPU: All CPU(s) started at EL1 Jul 10 00:34:28.765466 kernel: alternatives: patching kernel code Jul 10 00:34:28.765474 kernel: devtmpfs: initialized Jul 10 00:34:28.765480 kernel: KASLR enabled Jul 10 00:34:28.765487 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:34:28.765493 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:34:28.765500 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:34:28.765506 kernel: SMBIOS 3.0.0 present. 
Jul 10 00:34:28.765513 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 10 00:34:28.765520 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:34:28.765526 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 00:34:28.765534 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 00:34:28.765541 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 00:34:28.765547 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:34:28.765554 kernel: audit: type=2000 audit(0.036:1): state=initialized audit_enabled=0 res=1 Jul 10 00:34:28.765560 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:34:28.765567 kernel: cpuidle: using governor menu Jul 10 00:34:28.765573 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 00:34:28.765580 kernel: ASID allocator initialised with 32768 entries Jul 10 00:34:28.765586 kernel: ACPI: bus type PCI registered Jul 10 00:34:28.765594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:34:28.765601 kernel: Serial: AMBA PL011 UART driver Jul 10 00:34:28.765607 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:34:28.765614 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 00:34:28.765620 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:34:28.765627 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 00:34:28.765634 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:34:28.765641 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 00:34:28.765647 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:34:28.765655 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:34:28.765661 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:34:28.765668 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 00:34:28.765674 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 00:34:28.765681 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 00:34:28.765688 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:34:28.765694 kernel: ACPI: Interpreter enabled Jul 10 00:34:28.765701 kernel: ACPI: Using GIC for interrupt routing Jul 10 00:34:28.765707 kernel: ACPI: MCFG table detected, 1 entries Jul 10 00:34:28.765715 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 10 00:34:28.765722 kernel: printk: console [ttyAMA0] enabled Jul 10 00:34:28.765728 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:34:28.765852 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:34:28.765915 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 00:34:28.765972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 00:34:28.766059 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 10 00:34:28.766142 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 10 00:34:28.766152 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 10 00:34:28.766159 kernel: PCI host bridge to bus 0000:00 Jul 10 00:34:28.766228 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 10 00:34:28.766286 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 
00:34:28.766341 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 10 00:34:28.766394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:34:28.766470 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 10 00:34:28.766544 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 10 00:34:28.766606 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 10 00:34:28.766667 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 10 00:34:28.766727 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:34:28.766785 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:34:28.766844 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 10 00:34:28.766921 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 10 00:34:28.766977 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 10 00:34:28.767067 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 00:34:28.767134 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 10 00:34:28.767144 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 00:34:28.767151 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 00:34:28.767158 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 00:34:28.767165 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 00:34:28.767175 kernel: iommu: Default domain type: Translated Jul 10 00:34:28.767182 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 00:34:28.767188 kernel: vgaarb: loaded Jul 10 00:34:28.767195 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:34:28.767202 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:34:28.767209 kernel: PTP clock support registered Jul 10 00:34:28.767216 kernel: Registered efivars operations Jul 10 00:34:28.767222 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 00:34:28.767229 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:34:28.767238 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:34:28.767244 kernel: pnp: PnP ACPI init Jul 10 00:34:28.767313 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 10 00:34:28.767323 kernel: pnp: PnP ACPI: found 1 devices Jul 10 00:34:28.767330 kernel: NET: Registered PF_INET protocol family Jul 10 00:34:28.767337 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:34:28.767343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:34:28.767351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:34:28.767359 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:34:28.767366 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 10 00:34:28.767373 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:34:28.767380 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:34:28.767386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:34:28.767393 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:34:28.767400 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:34:28.767407 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 10 00:34:28.767413 kernel: kvm [1]: HYP mode not available Jul 10 00:34:28.767422 kernel: Initialise system trusted keyrings Jul 10 00:34:28.767428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:34:28.767435 kernel: Key type asymmetric registered Jul 10 00:34:28.767442 kernel: Asymmetric key parser 'x509' registered Jul 10 00:34:28.767448 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 00:34:28.767455 kernel: io scheduler mq-deadline registered Jul 10 00:34:28.767462 kernel: io scheduler kyber registered Jul 10 00:34:28.767469 kernel: io scheduler bfq registered Jul 10 00:34:28.767476 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 00:34:28.767484 kernel: ACPI: button: Power Button [PWRB] Jul 10 00:34:28.767491 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 00:34:28.767552 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 10 00:34:28.767561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:34:28.767568 kernel: thunder_xcv, ver 1.0 Jul 10 00:34:28.767575 kernel: thunder_bgx, ver 1.0 Jul 10 00:34:28.767582 kernel: nicpf, ver 1.0 Jul 10 00:34:28.767588 kernel: nicvf, ver 1.0 Jul 10 00:34:28.767655 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 00:34:28.767714 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:34:28 UTC (1752107668) Jul 10 00:34:28.767723 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:34:28.767730 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:34:28.767737 kernel: Segment Routing with IPv6 Jul 10 00:34:28.767743 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:34:28.767750 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:34:28.767757 kernel: Key type 
dns_resolver registered Jul 10 00:34:28.767763 kernel: registered taskstats version 1 Jul 10 00:34:28.767772 kernel: Loading compiled-in X.509 certificates Jul 10 00:34:28.767779 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 9e274a0dc4fc3d34232d90d226b034c4fe0e3e22' Jul 10 00:34:28.767785 kernel: Key type .fscrypt registered Jul 10 00:34:28.767792 kernel: Key type fscrypt-provisioning registered Jul 10 00:34:28.767799 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:34:28.767806 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:34:28.767812 kernel: ima: No architecture policies found Jul 10 00:34:28.767819 kernel: clk: Disabling unused clocks Jul 10 00:34:28.767826 kernel: Freeing unused kernel memory: 36416K Jul 10 00:34:28.767834 kernel: Run /init as init process Jul 10 00:34:28.767841 kernel: with arguments: Jul 10 00:34:28.767847 kernel: /init Jul 10 00:34:28.767854 kernel: with environment: Jul 10 00:34:28.767860 kernel: HOME=/ Jul 10 00:34:28.767867 kernel: TERM=linux Jul 10 00:34:28.767873 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:34:28.767882 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:34:28.767894 systemd[1]: Detected virtualization kvm. Jul 10 00:34:28.767903 systemd[1]: Detected architecture arm64. Jul 10 00:34:28.767911 systemd[1]: Running in initrd. Jul 10 00:34:28.767919 systemd[1]: No hostname configured, using default hostname. Jul 10 00:34:28.767927 systemd[1]: Hostname set to . Jul 10 00:34:28.767934 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:34:28.767941 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:34:28.767949 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:34:28.767957 systemd[1]: Reached target cryptsetup.target. Jul 10 00:34:28.767964 systemd[1]: Reached target paths.target. Jul 10 00:34:28.767971 systemd[1]: Reached target slices.target. Jul 10 00:34:28.767978 systemd[1]: Reached target swap.target. Jul 10 00:34:28.767985 systemd[1]: Reached target timers.target. Jul 10 00:34:28.767993 systemd[1]: Listening on iscsid.socket. Jul 10 00:34:28.768000 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:34:28.768008 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:34:28.768016 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:34:28.768023 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:34:28.768040 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:34:28.768048 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:34:28.768055 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:34:28.768062 systemd[1]: Reached target sockets.target. Jul 10 00:34:28.768069 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:34:28.768076 systemd[1]: Finished network-cleanup.service. Jul 10 00:34:28.768086 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:34:28.768093 systemd[1]: Starting systemd-journald.service... Jul 10 00:34:28.768100 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:34:28.768113 systemd[1]: Starting systemd-resolved.service... Jul 10 00:34:28.768120 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 10 00:34:28.768127 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:34:28.768134 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:34:28.768141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:34:28.768149 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 00:34:28.768157 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:34:28.768168 systemd-journald[290]: Journal started Jul 10 00:34:28.768209 systemd-journald[290]: Runtime Journal (/run/log/journal/655ac46e9d8c4a958c26373382e7d476) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:34:28.758445 systemd-modules-load[291]: Inserted module 'overlay' Jul 10 00:34:28.772328 systemd[1]: Started systemd-journald.service. Jul 10 00:34:28.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.772775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:34:28.778148 kernel: audit: type=1130 audit(1752107668.772:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.778176 kernel: audit: type=1130 audit(1752107668.775:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.776543 systemd-resolved[292]: Positive Trust Anchors: Jul 10 00:34:28.776550 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:34:28.776578 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:34:28.784769 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 10 00:34:28.785747 systemd[1]: Started systemd-resolved.service. Jul 10 00:34:28.790652 kernel: audit: type=1130 audit(1752107668.787:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.790674 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:34:28.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.787151 systemd[1]: Reached target nss-lookup.target. Jul 10 00:34:28.791425 systemd[1]: Finished dracut-cmdline-ask.service. 
Jul 10 00:34:28.795156 kernel: audit: type=1130 audit(1752107668.792:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.793416 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:34:28.796761 kernel: Bridge firewalling registered Jul 10 00:34:28.796006 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 10 00:34:28.803071 dracut-cmdline[307]: dracut-dracut-053 Jul 10 00:34:28.805345 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:34:28.812052 kernel: SCSI subsystem initialized Jul 10 00:34:28.818319 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:34:28.818358 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:34:28.819431 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:34:28.821339 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 10 00:34:28.822158 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:34:28.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.823599 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:28.826922 kernel: audit: type=1130 audit(1752107668.822:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.832565 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:28.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.836049 kernel: audit: type=1130 audit(1752107668.833:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.880055 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:34:28.892053 kernel: iscsi: registered transport (tcp) Jul 10 00:34:28.909054 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:34:28.909080 kernel: QLogic iSCSI HBA Driver Jul 10 00:34:28.941976 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:34:28.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.943477 systemd[1]: Starting dracut-pre-udev.service... 
Jul 10 00:34:28.945991 kernel: audit: type=1130 audit(1752107668.942:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:28.989053 kernel: raid6: neonx8 gen() 13702 MB/s Jul 10 00:34:29.006044 kernel: raid6: neonx8 xor() 10801 MB/s Jul 10 00:34:29.023038 kernel: raid6: neonx4 gen() 13501 MB/s Jul 10 00:34:29.040040 kernel: raid6: neonx4 xor() 11041 MB/s Jul 10 00:34:29.057046 kernel: raid6: neonx2 gen() 12919 MB/s Jul 10 00:34:29.074046 kernel: raid6: neonx2 xor() 10338 MB/s Jul 10 00:34:29.091052 kernel: raid6: neonx1 gen() 10514 MB/s Jul 10 00:34:29.108045 kernel: raid6: neonx1 xor() 8748 MB/s Jul 10 00:34:29.125039 kernel: raid6: int64x8 gen() 6219 MB/s Jul 10 00:34:29.142047 kernel: raid6: int64x8 xor() 3530 MB/s Jul 10 00:34:29.159045 kernel: raid6: int64x4 gen() 7195 MB/s Jul 10 00:34:29.176046 kernel: raid6: int64x4 xor() 3848 MB/s Jul 10 00:34:29.193054 kernel: raid6: int64x2 gen() 6120 MB/s Jul 10 00:34:29.210049 kernel: raid6: int64x2 xor() 3299 MB/s Jul 10 00:34:29.227046 kernel: raid6: int64x1 gen() 5037 MB/s Jul 10 00:34:29.244293 kernel: raid6: int64x1 xor() 2635 MB/s Jul 10 00:34:29.244311 kernel: raid6: using algorithm neonx8 gen() 13702 MB/s Jul 10 00:34:29.244321 kernel: raid6: .... xor() 10801 MB/s, rmw enabled Jul 10 00:34:29.244329 kernel: raid6: using neon recovery algorithm Jul 10 00:34:29.255209 kernel: xor: measuring software checksum speed Jul 10 00:34:29.255227 kernel: 8regs : 17202 MB/sec Jul 10 00:34:29.256211 kernel: 32regs : 20697 MB/sec Jul 10 00:34:29.256224 kernel: arm64_neon : 27766 MB/sec Jul 10 00:34:29.256232 kernel: xor: using function: arm64_neon (27766 MB/sec) Jul 10 00:34:29.312053 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 10 00:34:29.321871 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:34:29.325489 kernel: audit: type=1130 audit(1752107669.322:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:29.325514 kernel: audit: type=1334 audit(1752107669.324:10): prog-id=7 op=LOAD Jul 10 00:34:29.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:29.324000 audit: BPF prog-id=7 op=LOAD Jul 10 00:34:29.325000 audit: BPF prog-id=8 op=LOAD Jul 10 00:34:29.325838 systemd[1]: Starting systemd-udevd.service... Jul 10 00:34:29.339432 systemd-udevd[491]: Using default interface naming scheme 'v252'. Jul 10 00:34:29.342680 systemd[1]: Started systemd-udevd.service. Jul 10 00:34:29.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:29.344610 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:34:29.356197 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jul 10 00:34:29.381865 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:34:29.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:29.383232 systemd[1]: Starting systemd-udev-trigger.service... 
Jul 10 00:34:29.417814 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:34:29.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:29.446172 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:34:29.449221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:34:29.449236 kernel: GPT:9289727 != 19775487 Jul 10 00:34:29.449250 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:34:29.449258 kernel: GPT:9289727 != 19775487 Jul 10 00:34:29.449266 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:34:29.449274 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:29.463055 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (536) Jul 10 00:34:29.464732 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:34:29.468171 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:34:29.469142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:34:29.475945 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:34:29.481865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:34:29.483540 systemd[1]: Starting disk-uuid.service... Jul 10 00:34:29.489461 disk-uuid[560]: Primary Header is updated. Jul 10 00:34:29.489461 disk-uuid[560]: Secondary Entries is updated. Jul 10 00:34:29.489461 disk-uuid[560]: Secondary Header is updated. Jul 10 00:34:29.493041 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:30.507059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:30.507607 disk-uuid[561]: The operation has completed successfully. Jul 10 00:34:30.528677 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:34:30.529804 systemd[1]: Finished disk-uuid.service. Jul 10 00:34:30.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.532214 systemd[1]: Starting verity-setup.service... Jul 10 00:34:30.547048 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 00:34:30.567631 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:34:30.569900 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:34:30.571891 systemd[1]: Finished verity-setup.service. Jul 10 00:34:30.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.620047 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:34:30.620159 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:34:30.620938 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:34:30.621698 systemd[1]: Starting ignition-setup.service... Jul 10 00:34:30.623455 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 10 00:34:30.630202 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:34:30.630278 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:34:30.630308 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:34:30.637424 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:34:30.642496 systemd[1]: Finished ignition-setup.service. Jul 10 00:34:30.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.643884 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 00:34:30.704894 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:34:30.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.706000 audit: BPF prog-id=9 op=LOAD Jul 10 00:34:30.707200 systemd[1]: Starting systemd-networkd.service... Jul 10 00:34:30.729646 ignition[643]: Ignition 2.14.0 Jul 10 00:34:30.729656 ignition[643]: Stage: fetch-offline Jul 10 00:34:30.729689 ignition[643]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:30.729698 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:30.729823 ignition[643]: parsed url from cmdline: "" Jul 10 00:34:30.729826 ignition[643]: no config URL provided Jul 10 00:34:30.729830 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:34:30.729837 ignition[643]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:34:30.729853 ignition[643]: op(1): [started] loading QEMU firmware config module Jul 10 00:34:30.729858 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:34:30.735609 systemd-networkd[737]: lo: Link UP Jul 10 00:34:30.735619 systemd-networkd[737]: lo: Gained carrier Jul 10 00:34:30.736002 systemd-networkd[737]: Enumeration completed Jul 10 00:34:30.736734 ignition[643]: op(1): [finished] loading QEMU firmware config module Jul 10 00:34:30.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.736202 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:34:30.736329 systemd[1]: Started systemd-networkd.service. Jul 10 00:34:30.738104 systemd-networkd[737]: eth0: Link UP Jul 10 00:34:30.738108 systemd-networkd[737]: eth0: Gained carrier Jul 10 00:34:30.738243 systemd[1]: Reached target network.target. Jul 10 00:34:30.740271 systemd[1]: Starting iscsiuio.service... Jul 10 00:34:30.748368 ignition[643]: parsing config with SHA512: a884a0d5787fcb56e4d78ffdca926bf5520222ead8ee81e76243b40ab8e13645fcf36176f614a56cc4de92e94ff2d9dc32a231c71444db4d2816cf9a12f4e7f1 Jul 10 00:34:30.751574 systemd[1]: Started iscsiuio.service. Jul 10 00:34:30.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.752944 systemd[1]: Starting iscsid.service... 
Jul 10 00:34:30.753447 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:34:30.756553 iscsid[743]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:34:30.756553 iscsid[743]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:34:30.756553 iscsid[743]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 00:34:30.756553 iscsid[743]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:34:30.756553 iscsid[743]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:34:30.756553 iscsid[743]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:34:30.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.758710 ignition[643]: fetch-offline: fetch-offline passed Jul 10 00:34:30.758223 unknown[643]: fetched base config from "system" Jul 10 00:34:30.758781 ignition[643]: Ignition finished successfully Jul 10 00:34:30.758231 unknown[643]: fetched user config from "qemu" Jul 10 00:34:30.759382 systemd[1]: Started iscsid.service. Jul 10 00:34:30.760820 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:34:30.765286 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:34:30.766318 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:34:30.766970 systemd[1]: Starting ignition-kargs.service... Jul 10 00:34:30.775189 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:34:30.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.775896 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:34:30.775273 ignition[747]: Ignition 2.14.0 Jul 10 00:34:30.777639 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:34:30.775279 ignition[747]: Stage: kargs Jul 10 00:34:30.778653 systemd[1]: Reached target remote-fs.target. Jul 10 00:34:30.775365 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:30.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.780207 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:34:30.775374 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:30.781122 systemd[1]: Finished ignition-kargs.service. Jul 10 00:34:30.775986 ignition[747]: kargs: kargs passed Jul 10 00:34:30.782699 systemd[1]: Starting ignition-disks.service... 
Jul 10 00:34:30.776023 ignition[747]: Ignition finished successfully Jul 10 00:34:30.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.787679 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:34:30.790820 ignition[761]: Ignition 2.14.0 Jul 10 00:34:30.790830 ignition[761]: Stage: disks Jul 10 00:34:30.790920 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:30.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.792680 systemd[1]: Finished ignition-disks.service. Jul 10 00:34:30.790929 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:30.793489 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:34:30.791554 ignition[761]: disks: disks passed Jul 10 00:34:30.794185 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:34:30.791594 ignition[761]: Ignition finished successfully Jul 10 00:34:30.794843 systemd[1]: Reached target local-fs.target. Jul 10 00:34:30.795692 systemd[1]: Reached target sysinit.target. Jul 10 00:34:30.796796 systemd[1]: Reached target basic.target. Jul 10 00:34:30.799022 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:34:30.813074 systemd-fsck[773]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 10 00:34:30.816348 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:34:30.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.818045 systemd[1]: Mounting sysroot.mount... Jul 10 00:34:30.824936 systemd[1]: Mounted sysroot.mount. Jul 10 00:34:30.826103 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:34:30.825610 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:34:30.827652 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:34:30.828410 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:34:30.828450 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:34:30.828474 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:34:30.830465 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:34:30.832014 systemd[1]: Starting initrd-setup-root.service... Jul 10 00:34:30.836447 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:34:30.840597 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:34:30.844807 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:34:30.848970 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:34:30.875285 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:34:30.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.879190 systemd[1]: Starting ignition-mount.service... Jul 10 00:34:30.880421 systemd[1]: Starting sysroot-boot.service... 
Jul 10 00:34:30.885426 bash[824]: umount: /sysroot/usr/share/oem: not mounted. Jul 10 00:34:30.893623 ignition[826]: INFO : Ignition 2.14.0 Jul 10 00:34:30.893623 ignition[826]: INFO : Stage: mount Jul 10 00:34:30.894902 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:30.894902 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:30.894902 ignition[826]: INFO : mount: mount passed Jul 10 00:34:30.894902 ignition[826]: INFO : Ignition finished successfully Jul 10 00:34:30.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:30.896752 systemd[1]: Finished ignition-mount.service. Jul 10 00:34:30.901717 systemd[1]: Finished sysroot-boot.service. Jul 10 00:34:30.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:31.578858 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 10 00:34:31.585217 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (835) Jul 10 00:34:31.585246 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:34:31.585257 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:34:31.586222 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:34:31.588893 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:34:31.590243 systemd[1]: Starting ignition-files.service... Jul 10 00:34:31.603680 ignition[855]: INFO : Ignition 2.14.0 Jul 10 00:34:31.603680 ignition[855]: INFO : Stage: files Jul 10 00:34:31.605308 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:31.605308 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:31.605308 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:34:31.608682 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:34:31.608682 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:34:31.612331 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:34:31.613729 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:34:31.613729 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:34:31.613729 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:34:31.613729 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:34:31.613089 unknown[855]: wrote ssh authorized keys file for user: core Jul 10 00:34:31.620933 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:34:31.620933 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:34:31.620933 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 00:34:31.620933 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 00:34:31.620933 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 00:34:31.620933 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 10 00:34:31.842158 systemd-networkd[737]: eth0: Gained IPv6LL Jul 10 00:34:32.176691 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 10 00:34:32.595670 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 10 00:34:32.595670 ignition[855]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 10 00:34:32.598896 ignition[855]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:34:32.598896 ignition[855]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:34:32.598896 ignition[855]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 10 00:34:32.598896 ignition[855]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:34:32.598896 ignition[855]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:34:32.643561 ignition[855]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:34:32.645711 ignition[855]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:34:32.645711 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:34:32.645711 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:34:32.645711 ignition[855]: INFO : files: files passed Jul 10 00:34:32.645711 ignition[855]: INFO : Ignition finished successfully Jul 10 00:34:32.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.645754 systemd[1]: Finished ignition-files.service. Jul 10 00:34:32.648355 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 00:34:32.649283 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 10 00:34:32.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:32.655764 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 10 00:34:32.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.649963 systemd[1]: Starting ignition-quench.service... Jul 10 00:34:32.658666 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:34:32.653406 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:34:32.653488 systemd[1]: Finished ignition-quench.service. Jul 10 00:34:32.655429 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 00:34:32.656548 systemd[1]: Reached target ignition-complete.target. Jul 10 00:34:32.658859 systemd[1]: Starting initrd-parse-etc.service... Jul 10 00:34:32.671911 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:34:32.672015 systemd[1]: Finished initrd-parse-etc.service. Jul 10 00:34:32.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.673496 systemd[1]: Reached target initrd-fs.target. Jul 10 00:34:32.674529 systemd[1]: Reached target initrd.target. Jul 10 00:34:32.675527 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 00:34:32.676403 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 00:34:32.687958 systemd[1]: Finished dracut-pre-pivot.service. Jul 10 00:34:32.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.689651 systemd[1]: Starting initrd-cleanup.service... Jul 10 00:34:32.698587 systemd[1]: Stopped target nss-lookup.target. Jul 10 00:34:32.699356 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:34:32.700470 systemd[1]: Stopped target timers.target. Jul 10 00:34:32.701571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:34:32.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.701687 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 00:34:32.702649 systemd[1]: Stopped target initrd.target. Jul 10 00:34:32.703697 systemd[1]: Stopped target basic.target. Jul 10 00:34:32.704712 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:34:32.705737 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:34:32.706711 systemd[1]: Stopped target initrd-root-device.target. Jul 10 00:34:32.707803 systemd[1]: Stopped target remote-fs.target. Jul 10 00:34:32.708882 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 00:34:32.709974 systemd[1]: Stopped target sysinit.target. Jul 10 00:34:32.710999 systemd[1]: Stopped target local-fs.target. Jul 10 00:34:32.712049 systemd[1]: Stopped target local-fs-pre.target. 
Jul 10 00:34:32.713114 systemd[1]: Stopped target swap.target. Jul 10 00:34:32.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.714105 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:34:32.714227 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:34:32.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.715355 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:34:32.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.716282 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:34:32.716386 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:34:32.717548 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:34:32.717642 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:34:32.718729 systemd[1]: Stopped target paths.target. Jul 10 00:34:32.719695 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:34:32.724077 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:34:32.724810 systemd[1]: Stopped target slices.target. Jul 10 00:34:32.725966 systemd[1]: Stopped target sockets.target. Jul 10 00:34:32.726919 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:34:32.726991 systemd[1]: Closed iscsid.socket. Jul 10 00:34:32.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.728061 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:34:32.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.728178 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:34:32.729279 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:34:32.729370 systemd[1]: Stopped ignition-files.service. Jul 10 00:34:32.731124 systemd[1]: Stopping ignition-mount.service... Jul 10 00:34:32.732222 systemd[1]: Stopping iscsiuio.service... Jul 10 00:34:32.735409 systemd[1]: Stopping sysroot-boot.service... Jul 10 00:34:32.736307 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:34:32.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.736464 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:34:32.737738 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:34:32.737851 systemd[1]: Stopped dracut-pre-trigger.service. 
Jul 10 00:34:32.740123 ignition[896]: INFO : Ignition 2.14.0 Jul 10 00:34:32.740123 ignition[896]: INFO : Stage: umount Jul 10 00:34:32.740123 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:32.740123 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:32.740123 ignition[896]: INFO : umount: umount passed Jul 10 00:34:32.740123 ignition[896]: INFO : Ignition finished successfully Jul 10 00:34:32.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.742837 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 10 00:34:32.742951 systemd[1]: Stopped iscsiuio.service. Jul 10 00:34:32.747894 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:34:32.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.747980 systemd[1]: Stopped ignition-mount.service. Jul 10 00:34:32.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.749309 systemd[1]: Stopped target network.target. Jul 10 00:34:32.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.750405 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:34:32.750441 systemd[1]: Closed iscsiuio.socket. Jul 10 00:34:32.751789 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:34:32.751838 systemd[1]: Stopped ignition-disks.service. Jul 10 00:34:32.752899 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:34:32.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.752940 systemd[1]: Stopped ignition-kargs.service. Jul 10 00:34:32.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.754573 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:34:32.754621 systemd[1]: Stopped ignition-setup.service. Jul 10 00:34:32.755973 systemd[1]: Stopping systemd-networkd.service... Jul 10 00:34:32.757101 systemd[1]: Stopping systemd-resolved.service... 
Jul 10 00:34:32.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.759107 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:34:32.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.759655 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:34:32.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.759734 systemd[1]: Finished initrd-cleanup.service. Jul 10 00:34:32.761064 systemd-networkd[737]: eth0: DHCPv6 lease lost Jul 10 00:34:32.762057 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:34:32.762161 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:34:32.763249 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:34:32.777000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:34:32.763279 systemd[1]: Closed systemd-networkd.socket. Jul 10 00:34:32.765128 systemd[1]: Stopping network-cleanup.service... Jul 10 00:34:32.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.766238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:34:32.766309 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:34:32.768231 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:34:32.768280 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:34:32.785000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:34:32.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.770517 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:34:32.770563 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:34:32.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.771644 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:34:32.777607 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:34:32.778182 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:34:32.778276 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:34:32.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.781344 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:34:32.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.781496 systemd[1]: Stopped systemd-udevd.service. 
Jul 10 00:34:32.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.786148 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:34:32.786243 systemd[1]: Stopped network-cleanup.service. Jul 10 00:34:32.787980 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:34:32.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.788023 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:34:32.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.788936 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:34:32.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.788968 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:34:32.790335 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:34:32.790389 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:34:32.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.792369 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:34:32.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.792411 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:34:32.793412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:34:32.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:32.793449 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:34:32.795715 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:34:32.796449 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:34:32.796522 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 10 00:34:32.798232 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:34:32.798280 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 00:34:32.798981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:34:32.799016 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:34:32.801310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 00:34:32.801803 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 10 00:34:32.801900 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:34:32.802838 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:34:32.802909 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:34:32.803917 systemd[1]: Reached target initrd-switch-root.target. Jul 10 00:34:32.805012 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:34:32.805141 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:34:32.807008 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:34:32.814512 systemd[1]: Switching root. Jul 10 00:34:32.822406 iscsid[743]: iscsid shutting down. Jul 10 00:34:32.823037 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 10 00:34:32.823089 systemd-journald[290]: Journal stopped Jul 10 00:34:34.807841 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 00:34:34.807891 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 00:34:34.807902 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:34:34.807914 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:34:34.807924 kernel: SELinux: policy capability open_perms=1 Jul 10 00:34:34.807934 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:34:34.807944 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:34:34.807953 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:34:34.807962 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:34:34.807972 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:34:34.807986 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:34:34.807997 systemd[1]: Successfully loaded SELinux policy in 32.795ms. Jul 10 00:34:34.808022 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.105ms. Jul 10 00:34:34.808063 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:34:34.808078 systemd[1]: Detected virtualization kvm. Jul 10 00:34:34.808089 systemd[1]: Detected architecture arm64. Jul 10 00:34:34.808099 systemd[1]: Detected first boot. Jul 10 00:34:34.808110 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:34:34.808121 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 10 00:34:34.808131 kernel: kauditd_printk_skb: 70 callbacks suppressed Jul 10 00:34:34.808144 kernel: audit: type=1400 audit(1752107673.007:81): avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:34:34.808155 kernel: audit: type=1300 audit(1752107673.007:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014d89c a1=40000d0de0 a2=40000d70c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:34.808166 kernel: audit: type=1327 audit(1752107673.007:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:34:34.808177 kernel: audit: type=1400 audit(1752107673.008:82): avc: denied { associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 10 00:34:34.808187 kernel: audit: type=1300 audit(1752107673.008:82): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400014d975 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:34.808198 kernel: audit: type=1307 audit(1752107673.008:82): cwd="/" Jul 10 00:34:34.808209 kernel: audit: type=1302 audit(1752107673.008:82): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:34.808219 kernel: audit: type=1302 audit(1752107673.008:82): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:34.808229 kernel: audit: type=1327 audit(1752107673.008:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:34:34.808240 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:34:34.808251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:34:34.808262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:34:34.808273 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 10 00:34:34.808285 kernel: audit: type=1334 audit(1752107674.683:83): prog-id=12 op=LOAD Jul 10 00:34:34.808295 systemd[1]: iscsid.service: Deactivated successfully. Jul 10 00:34:34.808305 systemd[1]: Stopped iscsid.service. Jul 10 00:34:34.808316 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:34:34.808326 systemd[1]: Stopped initrd-switch-root.service. Jul 10 00:34:34.808341 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:34:34.808352 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:34:34.808364 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:34:34.808374 systemd[1]: Created slice system-getty.slice. Jul 10 00:34:34.808384 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:34:34.808396 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:34:34.808407 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:34:34.808418 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:34:34.808428 systemd[1]: Created slice user.slice. Jul 10 00:34:34.808438 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:34:34.808453 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:34:34.808463 systemd[1]: Set up automount boot.automount. Jul 10 00:34:34.808474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:34:34.808489 systemd[1]: Stopped target initrd-switch-root.target. Jul 10 00:34:34.808499 systemd[1]: Stopped target initrd-fs.target. Jul 10 00:34:34.808511 systemd[1]: Stopped target initrd-root-fs.target. Jul 10 00:34:34.808521 systemd[1]: Reached target integritysetup.target. Jul 10 00:34:34.808531 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:34:34.808542 systemd[1]: Reached target remote-fs.target. Jul 10 00:34:34.808552 systemd[1]: Reached target slices.target. Jul 10 00:34:34.808562 systemd[1]: Reached target swap.target. Jul 10 00:34:34.808573 systemd[1]: Reached target torcx.target. Jul 10 00:34:34.808583 systemd[1]: Reached target veritysetup.target. Jul 10 00:34:34.808594 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:34:34.808604 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:34:34.808616 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:34:34.808627 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:34:34.808638 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:34:34.808648 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:34:34.808658 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:34:34.808668 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:34:34.808678 systemd[1]: Mounting media.mount... Jul 10 00:34:34.808688 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:34:34.808699 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:34:34.808711 systemd[1]: Mounting tmp.mount... Jul 10 00:34:34.808721 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:34:34.808732 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:34.808742 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:34:34.808752 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:34:34.808762 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:34.808772 systemd[1]: Starting modprobe@drm.service... Jul 10 00:34:34.808782 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 10 00:34:34.808793 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:34:34.808804 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:34.808814 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:34:34.808824 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:34:34.808834 systemd[1]: Stopped systemd-fsck-root.service. Jul 10 00:34:34.808846 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:34:34.808856 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:34:34.808866 systemd[1]: Stopped systemd-journald.service. Jul 10 00:34:34.808876 kernel: fuse: init (API version 7.34) Jul 10 00:34:34.808887 kernel: loop: module loaded Jul 10 00:34:34.808898 systemd[1]: Starting systemd-journald.service... Jul 10 00:34:34.808909 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:34:34.808919 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:34:34.808931 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:34:34.808942 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:34:34.808952 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:34:34.808962 systemd[1]: Stopped verity-setup.service. Jul 10 00:34:34.808972 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:34:34.808983 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:34:34.808993 systemd[1]: Mounted media.mount. Jul 10 00:34:34.809004 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:34:34.809014 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:34:34.809040 systemd[1]: Mounted tmp.mount. Jul 10 00:34:34.809053 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:34:34.809070 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:34:34.809081 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:34:34.809091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:34.809102 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:34.809115 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:34:34.809125 systemd[1]: Finished modprobe@drm.service. Jul 10 00:34:34.809137 systemd-journald[996]: Journal started Jul 10 00:34:34.809180 systemd-journald[996]: Runtime Journal (/run/log/journal/655ac46e9d8c4a958c26373382e7d476) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:34:34.809210 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 10 00:34:32.887000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:34:32.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:34:32.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:34:32.955000 audit: BPF prog-id=10 op=LOAD Jul 10 00:34:32.955000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:34:32.955000 audit: BPF prog-id=11 op=LOAD Jul 10 00:34:32.955000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:34:33.007000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:34:33.007000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014d89c a1=40000d0de0 a2=40000d70c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:33.007000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:34:33.008000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 10 00:34:33.008000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400014d975 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:33.008000 audit: CWD cwd="/" Jul 10 00:34:33.008000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:33.008000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:33.008000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:34:34.683000 audit: BPF prog-id=12 op=LOAD Jul 10 00:34:34.683000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:34:34.684000 audit: BPF prog-id=13 op=LOAD Jul 10 00:34:34.684000 audit: BPF prog-id=14 op=LOAD Jul 10 00:34:34.684000 audit: BPF prog-id=4 op=UNLOAD Jul 10 00:34:34.684000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:34:34.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.695000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:34:34.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.776000 audit: BPF prog-id=15 op=LOAD Jul 10 00:34:34.777000 audit: BPF prog-id=16 op=LOAD Jul 10 00:34:34.777000 audit: BPF prog-id=17 op=LOAD Jul 10 00:34:34.777000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:34:34.777000 audit: BPF prog-id=14 op=UNLOAD Jul 10 00:34:34.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:34.806000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:34:34.806000 audit[996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffefff62f0 a2=4000 a3=1 items=0 ppid=1 pid=996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:34.806000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:34:34.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.681618 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:34:33.004779 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:34:34.681630 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:34:34.810468 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:33.005062 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:34:34.684952 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 10 00:34:33.005089 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:34:33.005121 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 10 00:34:33.005130 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 10 00:34:33.005161 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 10 00:34:33.005173 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 10 00:34:33.005379 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 10 00:34:33.005414 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:34:33.005426 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:34:33.006277 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 10 00:34:33.006315 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 10 00:34:33.006334 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 10 00:34:34.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:33.006350 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 10 00:34:33.006369 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 10 00:34:33.006395 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 10 00:34:34.444404 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:34.444672 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:34.444764 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:34.444921 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:34.444970 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 10 00:34:34.445025 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-07-10T00:34:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 10 00:34:34.812667 systemd[1]: Started systemd-journald.service. Jul 10 00:34:34.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.813419 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:34:34.813822 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:34:34.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:34.814921 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:34.815150 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:34.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.816331 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:34:34.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.817451 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:34:34.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.820265 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:34:34.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.821507 systemd[1]: Reached target network-pre.target. Jul 10 00:34:34.823508 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:34:34.825351 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:34:34.826080 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:34:34.827594 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:34:34.829518 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:34:34.830356 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:34.831411 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:34:34.832276 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:34.833259 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:34.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.836747 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:34:34.837769 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:34:34.838914 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:34:34.848043 systemd-journald[996]: Time spent on flushing to /var/log/journal/655ac46e9d8c4a958c26373382e7d476 is 17.576ms for 973 entries. Jul 10 00:34:34.848043 systemd-journald[996]: System Journal (/var/log/journal/655ac46e9d8c4a958c26373382e7d476) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:34:34.875935 systemd-journald[996]: Received client request to flush runtime journal. 
Jul 10 00:34:34.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.840921 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:34:34.843300 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:34:34.844275 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:34:34.876672 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 10 00:34:34.854495 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:34.856808 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:34:34.858735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:34:34.863952 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:34:34.866021 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:34:34.876843 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:34:34.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:34.881835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:34:34.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.219783 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:34:35.222210 systemd[1]: Starting systemd-udevd.service... Jul 10 00:34:35.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.221000 audit: BPF prog-id=18 op=LOAD Jul 10 00:34:35.221000 audit: BPF prog-id=19 op=LOAD Jul 10 00:34:35.221000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:34:35.221000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:34:35.240615 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Jul 10 00:34:35.259475 systemd[1]: Started systemd-udevd.service. Jul 10 00:34:35.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.261000 audit: BPF prog-id=20 op=LOAD Jul 10 00:34:35.261997 systemd[1]: Starting systemd-networkd.service... 
Jul 10 00:34:35.278000 audit: BPF prog-id=21 op=LOAD Jul 10 00:34:35.278000 audit: BPF prog-id=22 op=LOAD Jul 10 00:34:35.278000 audit: BPF prog-id=23 op=LOAD Jul 10 00:34:35.279651 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:34:35.287319 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 10 00:34:35.307459 systemd[1]: Started systemd-userdbd.service. Jul 10 00:34:35.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.332571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:34:35.373588 systemd-networkd[1038]: lo: Link UP Jul 10 00:34:35.373599 systemd-networkd[1038]: lo: Gained carrier Jul 10 00:34:35.373988 systemd-networkd[1038]: Enumeration completed Jul 10 00:34:35.374123 systemd[1]: Started systemd-networkd.service. Jul 10 00:34:35.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.374893 systemd-networkd[1038]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:34:35.377409 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:34:35.378018 systemd-networkd[1038]: eth0: Link UP Jul 10 00:34:35.378038 systemd-networkd[1038]: eth0: Gained carrier Jul 10 00:34:35.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.379389 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:34:35.387945 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:34:35.394160 systemd-networkd[1038]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:34:35.433991 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:34:35.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.434796 systemd[1]: Reached target cryptsetup.target. Jul 10 00:34:35.436605 systemd[1]: Starting lvm2-activation.service... Jul 10 00:34:35.440284 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:34:35.475910 systemd[1]: Finished lvm2-activation.service. Jul 10 00:34:35.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.476708 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:34:35.477354 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:34:35.477387 systemd[1]: Reached target local-fs.target. Jul 10 00:34:35.477937 systemd[1]: Reached target machines.target. Jul 10 00:34:35.479764 systemd[1]: Starting ldconfig.service... Jul 10 00:34:35.480668 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 10 00:34:35.480719 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:35.481711 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:34:35.483467 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:34:35.485206 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:34:35.487613 systemd[1]: Starting systemd-sysext.service... Jul 10 00:34:35.488657 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Jul 10 00:34:35.489721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:34:35.500164 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:34:35.501397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:34:35.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.506710 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:34:35.506911 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:34:35.519069 kernel: loop0: detected capacity change from 0 to 207008 Jul 10 00:34:35.561193 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:34:35.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.566947 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Jul 10 00:34:35.566947 systemd-fsck[1080]: /dev/vda1: 236 files, 117310/258078 clusters Jul 10 00:34:35.569057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:34:35.569981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:34:35.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.588055 kernel: loop1: detected capacity change from 0 to 207008 Jul 10 00:34:35.593107 (sd-sysext)[1087]: Using extensions 'kubernetes'. Jul 10 00:34:35.593427 (sd-sysext)[1087]: Merged extensions into '/usr'. Jul 10 00:34:35.608666 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:35.609905 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:35.611702 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:35.613629 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:35.614301 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:35.614432 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:35.615223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:35.615342 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 10 00:34:35.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.616453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:35.616572 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:35.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.617725 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:35.617825 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:35.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.618895 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:35.619023 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:35.655595 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:34:35.658656 systemd[1]: Finished ldconfig.service. Jul 10 00:34:35.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.792333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:34:35.794172 systemd[1]: Mounting boot.mount... Jul 10 00:34:35.795858 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:34:35.801633 systemd[1]: Mounted boot.mount. Jul 10 00:34:35.802397 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:34:35.804062 systemd[1]: Finished systemd-sysext.service. Jul 10 00:34:35.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.805784 systemd[1]: Starting ensure-sysext.service... Jul 10 00:34:35.807681 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:34:35.808759 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:34:35.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:35.812485 systemd[1]: Reloading. Jul 10 00:34:35.816784 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:34:35.817850 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:34:35.819139 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:34:35.838872 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-07-10T00:34:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:34:35.838902 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-07-10T00:34:35Z" level=info msg="torcx already run" Jul 10 00:34:35.902533 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:34:35.902554 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:34:35.917642 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:34:35.961000 audit: BPF prog-id=24 op=LOAD Jul 10 00:34:35.962000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:34:35.962000 audit: BPF prog-id=25 op=LOAD Jul 10 00:34:35.962000 audit: BPF prog-id=26 op=LOAD Jul 10 00:34:35.962000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:34:35.962000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:34:35.963000 audit: BPF prog-id=27 op=LOAD Jul 10 00:34:35.963000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:34:35.963000 audit: BPF prog-id=28 op=LOAD Jul 10 00:34:35.963000 audit: BPF prog-id=29 op=LOAD Jul 10 00:34:35.963000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:34:35.963000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:34:35.963000 audit: BPF prog-id=30 op=LOAD Jul 10 00:34:35.963000 audit: BPF prog-id=31 op=LOAD Jul 10 00:34:35.963000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:34:35.963000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:34:35.964000 audit: BPF prog-id=32 op=LOAD Jul 10 00:34:35.964000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:34:35.966731 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:34:35.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.970928 systemd[1]: Starting audit-rules.service... Jul 10 00:34:35.972749 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:34:35.978000 audit: BPF prog-id=33 op=LOAD Jul 10 00:34:35.980000 audit: BPF prog-id=34 op=LOAD Jul 10 00:34:35.974768 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:34:35.979025 systemd[1]: Starting systemd-resolved.service... Jul 10 00:34:35.982467 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:34:35.984308 systemd[1]: Starting systemd-update-utmp.service... Jul 10 00:34:35.985475 systemd[1]: Finished clean-ca-certificates.service. 
Jul 10 00:34:35.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.988097 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:35.989749 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:35.991851 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:35.993000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:34:35.993636 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:35.996382 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:35.998242 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:35.998373 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:35.998468 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:35.999326 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:34:36.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.000799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:36.000922 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:36.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.002214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:36.002330 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:36.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.003519 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:36.003629 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:36.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:36.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.006551 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:36.006689 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.007973 systemd[1]: Starting systemd-update-done.service... Jul 10 00:34:36.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.009539 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:34:36.012480 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.013744 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:36.015769 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:36.017647 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:36.018484 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.018608 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:36.018699 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:36.019582 systemd[1]: Finished systemd-update-done.service. Jul 10 00:34:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.020856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:36.020970 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:36.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.022213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:36.022333 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:36.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.023757 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:36.023870 systemd[1]: Finished modprobe@loop.service. 
Jul 10 00:34:36.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:36.027218 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.029835 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:36.031000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:34:36.031000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce224400 a2=420 a3=0 items=0 ppid=1154 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:36.031000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:34:36.031779 augenrules[1181]: No rules Jul 10 00:34:36.031870 systemd[1]: Starting modprobe@drm.service... Jul 10 00:34:36.034119 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:36.036161 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:36.036923 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.037125 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:36.038345 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:34:36.039360 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:36.040464 systemd[1]: Finished audit-rules.service. Jul 10 00:34:36.041643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:36.041762 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:36.042950 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:34:36.043096 systemd[1]: Finished modprobe@drm.service. Jul 10 00:34:36.043915 systemd-timesyncd[1164]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:34:36.044315 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:34:36.044396 systemd-timesyncd[1164]: Initial clock synchronization to Thu 2025-07-10 00:34:36.092330 UTC. Jul 10 00:34:36.045855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:36.045973 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:36.047213 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:36.047329 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:36.048706 systemd[1]: Reached target time-set.target. Jul 10 00:34:36.049650 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:36.049691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.049997 systemd[1]: Finished ensure-sysext.service. 
Jul 10 00:34:36.051111 systemd-resolved[1161]: Positive Trust Anchors: Jul 10 00:34:36.051122 systemd-resolved[1161]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:34:36.051149 systemd-resolved[1161]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:34:36.060567 systemd-resolved[1161]: Defaulting to hostname 'linux'. Jul 10 00:34:36.063910 systemd[1]: Started systemd-resolved.service. Jul 10 00:34:36.064805 systemd[1]: Reached target network.target. Jul 10 00:34:36.065622 systemd[1]: Reached target nss-lookup.target. Jul 10 00:34:36.066396 systemd[1]: Reached target sysinit.target. Jul 10 00:34:36.067214 systemd[1]: Started motdgen.path. Jul 10 00:34:36.067889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:34:36.069126 systemd[1]: Started logrotate.timer. Jul 10 00:34:36.069890 systemd[1]: Started mdadm.timer. Jul 10 00:34:36.070627 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:34:36.071447 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:34:36.071484 systemd[1]: Reached target paths.target. Jul 10 00:34:36.072189 systemd[1]: Reached target timers.target. Jul 10 00:34:36.073282 systemd[1]: Listening on dbus.socket. Jul 10 00:34:36.075107 systemd[1]: Starting docker.socket... Jul 10 00:34:36.078291 systemd[1]: Listening on sshd.socket. Jul 10 00:34:36.079169 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:36.079621 systemd[1]: Listening on docker.socket. Jul 10 00:34:36.080428 systemd[1]: Reached target sockets.target. Jul 10 00:34:36.081153 systemd[1]: Reached target basic.target. Jul 10 00:34:36.081874 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.081907 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:34:36.082916 systemd[1]: Starting containerd.service... Jul 10 00:34:36.084702 systemd[1]: Starting dbus.service... Jul 10 00:34:36.086368 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:34:36.088275 systemd[1]: Starting extend-filesystems.service... Jul 10 00:34:36.089119 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:34:36.090488 systemd[1]: Starting motdgen.service... Jul 10 00:34:36.095240 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:34:36.097152 systemd[1]: Starting sshd-keygen.service... Jul 10 00:34:36.100735 systemd[1]: Starting systemd-logind.service... Jul 10 00:34:36.101663 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 10 00:34:36.101737 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:34:36.102841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:34:36.103543 systemd[1]: Starting update-engine.service... Jul 10 00:34:36.108471 jq[1209]: true Jul 10 00:34:36.105257 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:34:36.107743 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:34:36.107913 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:34:36.109787 extend-filesystems[1197]: Found loop1 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda1 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda2 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda3 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found usr Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda4 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda6 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda7 Jul 10 00:34:36.113072 extend-filesystems[1197]: Found vda9 Jul 10 00:34:36.113072 extend-filesystems[1197]: Checking size of /dev/vda9 Jul 10 00:34:36.112430 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:34:36.125122 jq[1196]: false Jul 10 00:34:36.112596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:34:36.126100 jq[1213]: true Jul 10 00:34:36.130022 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:34:36.130251 systemd[1]: Finished motdgen.service. Jul 10 00:34:36.136074 dbus-daemon[1195]: [system] SELinux support is enabled Jul 10 00:34:36.136230 systemd[1]: Started dbus.service. Jul 10 00:34:36.138678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:34:36.138708 systemd[1]: Reached target system-config.target. Jul 10 00:34:36.139627 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:34:36.139649 systemd[1]: Reached target user-config.target. Jul 10 00:34:36.142534 extend-filesystems[1197]: Resized partition /dev/vda9 Jul 10 00:34:36.155467 extend-filesystems[1234]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:34:36.174134 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:34:36.176011 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:34:36.178239 systemd-logind[1204]: New seat seat0. Jul 10 00:34:36.179962 systemd[1]: Started systemd-logind.service. Jul 10 00:34:36.203833 update_engine[1208]: I0710 00:34:36.202917 1208 main.cc:92] Flatcar Update Engine starting Jul 10 00:34:36.210069 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:34:36.213057 systemd[1]: Started update-engine.service. Jul 10 00:34:36.216051 systemd[1]: Started locksmithd.service. 
Jul 10 00:34:36.243153 update_engine[1208]: I0710 00:34:36.213119 1208 update_check_scheduler.cc:74] Next update check in 7m24s Jul 10 00:34:36.243571 extend-filesystems[1234]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:34:36.243571 extend-filesystems[1234]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:34:36.243571 extend-filesystems[1234]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:34:36.246626 extend-filesystems[1197]: Resized filesystem in /dev/vda9 Jul 10 00:34:36.245553 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:34:36.248515 bash[1242]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:34:36.245745 systemd[1]: Finished extend-filesystems.service. Jul 10 00:34:36.248253 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:34:36.249955 env[1214]: time="2025-07-10T00:34:36.249514400Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:34:36.269890 env[1214]: time="2025-07-10T00:34:36.269838440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:34:36.270114 env[1214]: time="2025-07-10T00:34:36.270024920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:36.271736 env[1214]: time="2025-07-10T00:34:36.271690160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:36.271736 env[1214]: time="2025-07-10T00:34:36.271734920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272060 env[1214]: time="2025-07-10T00:34:36.271994720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272060 env[1214]: time="2025-07-10T00:34:36.272018520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272115 env[1214]: time="2025-07-10T00:34:36.272069080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:34:36.272115 env[1214]: time="2025-07-10T00:34:36.272082280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272224 env[1214]: time="2025-07-10T00:34:36.272184760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272478 env[1214]: time="2025-07-10T00:34:36.272442520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272642 env[1214]: time="2025-07-10T00:34:36.272614800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:36.272672 env[1214]: time="2025-07-10T00:34:36.272635720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:34:36.272727 env[1214]: time="2025-07-10T00:34:36.272711400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:34:36.272758 env[1214]: time="2025-07-10T00:34:36.272727800Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:34:36.277646 env[1214]: time="2025-07-10T00:34:36.277613000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:34:36.277703 env[1214]: time="2025-07-10T00:34:36.277653240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:34:36.277703 env[1214]: time="2025-07-10T00:34:36.277676200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:34:36.277761 env[1214]: time="2025-07-10T00:34:36.277710200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.277761 env[1214]: time="2025-07-10T00:34:36.277731440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.277761 env[1214]: time="2025-07-10T00:34:36.277754200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.277818 env[1214]: time="2025-07-10T00:34:36.277767400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.278206 env[1214]: time="2025-07-10T00:34:36.278186480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.278246 env[1214]: time="2025-07-10T00:34:36.278212720Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.278246 env[1214]: time="2025-07-10T00:34:36.278236080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.278287 env[1214]: time="2025-07-10T00:34:36.278251800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.278287 env[1214]: time="2025-07-10T00:34:36.278265840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:34:36.278421 env[1214]: time="2025-07-10T00:34:36.278404160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:34:36.278511 env[1214]: time="2025-07-10T00:34:36.278496160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:34:36.279127 env[1214]: time="2025-07-10T00:34:36.279094920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:34:36.279164 env[1214]: time="2025-07-10T00:34:36.279141600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 10 00:34:36.279185 env[1214]: time="2025-07-10T00:34:36.279161880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:34:36.279323 env[1214]: time="2025-07-10T00:34:36.279301960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279358 env[1214]: time="2025-07-10T00:34:36.279325120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279358 env[1214]: time="2025-07-10T00:34:36.279342560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279397 env[1214]: time="2025-07-10T00:34:36.279357680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279397 env[1214]: time="2025-07-10T00:34:36.279374440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279397 env[1214]: time="2025-07-10T00:34:36.279391520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279452 env[1214]: time="2025-07-10T00:34:36.279408360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279452 env[1214]: time="2025-07-10T00:34:36.279425560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279452 env[1214]: time="2025-07-10T00:34:36.279444000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:34:36.279606 env[1214]: time="2025-07-10T00:34:36.279576640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279633 env[1214]: time="2025-07-10T00:34:36.279606560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279633 env[1214]: time="2025-07-10T00:34:36.279623800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:34:36.279678 env[1214]: time="2025-07-10T00:34:36.279640040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:34:36.279700 env[1214]: time="2025-07-10T00:34:36.279680920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:34:36.279700 env[1214]: time="2025-07-10T00:34:36.279696400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:34:36.279743 env[1214]: time="2025-07-10T00:34:36.279719480Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:34:36.279843 env[1214]: time="2025-07-10T00:34:36.279801000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:34:36.280146 env[1214]: time="2025-07-10T00:34:36.280094520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:34:36.280814 env[1214]: time="2025-07-10T00:34:36.280161160Z" level=info msg="Connect containerd service" Jul 10 00:34:36.280814 env[1214]: time="2025-07-10T00:34:36.280208640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:34:36.280935 env[1214]: time="2025-07-10T00:34:36.280905720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:34:36.281149 env[1214]: time="2025-07-10T00:34:36.281040880Z" level=info msg="Start subscribing containerd event" Jul 10 00:34:36.281149 env[1214]: time="2025-07-10T00:34:36.281106360Z" level=info msg="Start recovering state" Jul 10 00:34:36.281238 env[1214]: time="2025-07-10T00:34:36.281182520Z" level=info msg="Start event monitor" Jul 10 00:34:36.281238 env[1214]: time="2025-07-10T00:34:36.281204680Z" level=info msg="Start snapshots syncer" Jul 10 00:34:36.281238 env[1214]: time="2025-07-10T00:34:36.281217360Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:34:36.281238 env[1214]: time="2025-07-10T00:34:36.281229640Z" level=info msg="Start streaming server" Jul 10 00:34:36.281326 env[1214]: time="2025-07-10T00:34:36.281286600Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 00:34:36.281352 env[1214]: time="2025-07-10T00:34:36.281335320Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:34:36.282083 env[1214]: time="2025-07-10T00:34:36.281390800Z" level=info msg="containerd successfully booted in 0.057976s" Jul 10 00:34:36.281467 systemd[1]: Started containerd.service. Jul 10 00:34:36.292061 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:34:36.578176 systemd-networkd[1038]: eth0: Gained IPv6LL Jul 10 00:34:36.579847 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:34:36.581159 systemd[1]: Reached target network-online.target. Jul 10 00:34:36.583513 systemd[1]: Starting kubelet.service... Jul 10 00:34:37.156604 systemd[1]: Started kubelet.service. Jul 10 00:34:37.495270 sshd_keygen[1218]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:34:37.513806 systemd[1]: Finished sshd-keygen.service. Jul 10 00:34:37.516459 systemd[1]: Starting issuegen.service... Jul 10 00:34:37.521704 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:34:37.521881 systemd[1]: Finished issuegen.service. Jul 10 00:34:37.524119 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:34:37.530685 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:34:37.533101 systemd[1]: Started getty@tty1.service. Jul 10 00:34:37.535180 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 10 00:34:37.536125 systemd[1]: Reached target getty.target. Jul 10 00:34:37.536902 systemd[1]: Reached target multi-user.target. Jul 10 00:34:37.538818 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:34:37.545859 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:34:37.546022 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:34:37.546926 systemd[1]: Startup finished in 631ms (kernel) + 4.250s (initrd) + 4.694s (userspace) = 9.576s. Jul 10 00:34:37.615357 kubelet[1259]: E0710 00:34:37.615315 1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:34:37.617210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:34:37.617377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:34:41.521453 systemd[1]: Created slice system-sshd.slice. Jul 10 00:34:41.522598 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:58584.service. Jul 10 00:34:41.572974 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 58584 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:34:41.575456 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:41.590052 systemd-logind[1204]: New session 1 of user core. Jul 10 00:34:41.591076 systemd[1]: Created slice user-500.slice. Jul 10 00:34:41.592281 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 00:34:41.601312 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:34:41.602768 systemd[1]: Starting user@500.service... 
Jul 10 00:34:41.605784 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:41.671545 systemd[1284]: Queued start job for default target default.target. Jul 10 00:34:41.672052 systemd[1284]: Reached target paths.target. Jul 10 00:34:41.672085 systemd[1284]: Reached target sockets.target. Jul 10 00:34:41.672097 systemd[1284]: Reached target timers.target. Jul 10 00:34:41.672111 systemd[1284]: Reached target basic.target. Jul 10 00:34:41.672151 systemd[1284]: Reached target default.target. Jul 10 00:34:41.672176 systemd[1284]: Startup finished in 60ms. Jul 10 00:34:41.672385 systemd[1]: Started user@500.service. Jul 10 00:34:41.673403 systemd[1]: Started session-1.scope. Jul 10 00:34:41.725932 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:58598.service. Jul 10 00:34:41.762718 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 58598 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:34:41.764100 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:41.767707 systemd-logind[1204]: New session 2 of user core. Jul 10 00:34:41.769016 systemd[1]: Started session-2.scope. Jul 10 00:34:41.824421 sshd[1293]: pam_unix(sshd:session): session closed for user core Jul 10 00:34:41.827127 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:58598.service: Deactivated successfully. Jul 10 00:34:41.827708 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:34:41.828254 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:34:41.829321 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:58610.service. Jul 10 00:34:41.830158 systemd-logind[1204]: Removed session 2. Jul 10 00:34:41.866048 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 58610 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:34:41.867433 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:41.871278 systemd-logind[1204]: New session 3 of user core. Jul 10 00:34:41.871740 systemd[1]: Started session-3.scope. Jul 10 00:34:41.925651 sshd[1299]: pam_unix(sshd:session): session closed for user core Jul 10 00:34:41.928246 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:58610.service: Deactivated successfully. Jul 10 00:34:41.928817 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:34:41.929341 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:34:41.930404 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:58622.service. Jul 10 00:34:41.931053 systemd-logind[1204]: Removed session 3. Jul 10 00:34:41.967471 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 58622 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:34:41.968816 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:41.972258 systemd-logind[1204]: New session 4 of user core. Jul 10 00:34:41.973119 systemd[1]: Started session-4.scope. Jul 10 00:34:42.028100 sshd[1305]: pam_unix(sshd:session): session closed for user core Jul 10 00:34:42.031796 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:58622.service: Deactivated successfully. Jul 10 00:34:42.032475 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:34:42.032985 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:34:42.034103 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:58634.service. Jul 10 00:34:42.034750 systemd-logind[1204]: Removed session 4. 
Jul 10 00:34:42.071652 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 58634 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:34:42.073282 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:42.076519 systemd-logind[1204]: New session 5 of user core. Jul 10 00:34:42.077422 systemd[1]: Started session-5.scope. Jul 10 00:34:42.140490 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:34:42.140728 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:34:42.152892 systemd[1]: Starting coreos-metadata.service... Jul 10 00:34:42.159536 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:34:42.159851 systemd[1]: Finished coreos-metadata.service. Jul 10 00:34:42.629356 systemd[1]: Stopped kubelet.service. Jul 10 00:34:42.631379 systemd[1]: Starting kubelet.service... Jul 10 00:34:42.653815 systemd[1]: Reloading. Jul 10 00:34:42.708552 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-07-10T00:34:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:34:42.708581 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-07-10T00:34:42Z" level=info msg="torcx already run" Jul 10 00:34:42.885203 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:34:42.885224 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:34:42.900534 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:34:42.964576 systemd[1]: Started kubelet.service. Jul 10 00:34:42.965880 systemd[1]: Stopping kubelet.service... Jul 10 00:34:42.966168 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:34:42.966355 systemd[1]: Stopped kubelet.service. Jul 10 00:34:42.967947 systemd[1]: Starting kubelet.service... Jul 10 00:34:43.057324 systemd[1]: Started kubelet.service. Jul 10 00:34:43.098426 kubelet[1417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:34:43.098426 kubelet[1417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:34:43.098426 kubelet[1417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:34:43.098794 kubelet[1417]: I0710 00:34:43.098484 1417 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:34:44.182819 kubelet[1417]: I0710 00:34:44.182770 1417 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:34:44.182819 kubelet[1417]: I0710 00:34:44.182806 1417 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:34:44.183181 kubelet[1417]: I0710 00:34:44.183103 1417 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:34:44.229484 kubelet[1417]: I0710 00:34:44.229453 1417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:34:44.237419 kubelet[1417]: E0710 00:34:44.237378 1417 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:34:44.237419 kubelet[1417]: I0710 00:34:44.237408 1417 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:34:44.240798 kubelet[1417]: I0710 00:34:44.240770 1417 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:34:44.241792 kubelet[1417]: I0710 00:34:44.241744 1417 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:34:44.241968 kubelet[1417]: I0710 00:34:44.241792 1417 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.80","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:34:44.242081 kubelet[1417]: I0710 00:34:44.242049 1417 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:34:44.242081 kubelet[1417]: I0710 00:34:44.242061 1417 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 
00:34:44.242273 kubelet[1417]: I0710 00:34:44.242249 1417 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:34:44.250152 kubelet[1417]: I0710 00:34:44.250119 1417 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:34:44.250152 kubelet[1417]: I0710 00:34:44.250151 1417 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:34:44.250256 kubelet[1417]: I0710 00:34:44.250178 1417 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:34:44.250256 kubelet[1417]: I0710 00:34:44.250243 1417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:34:44.250466 kubelet[1417]: E0710 00:34:44.250444 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:44.257610 kubelet[1417]: E0710 00:34:44.257579 1417 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:44.268951 kubelet[1417]: I0710 00:34:44.268919 1417 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:34:44.269620 kubelet[1417]: I0710 00:34:44.269588 1417 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:34:44.269728 kubelet[1417]: W0710 00:34:44.269710 1417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:34:44.270565 kubelet[1417]: I0710 00:34:44.270545 1417 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:34:44.270631 kubelet[1417]: I0710 00:34:44.270586 1417 server.go:1287] "Started kubelet" Jul 10 00:34:44.270716 kubelet[1417]: I0710 00:34:44.270686 1417 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:34:44.271774 kubelet[1417]: I0710 00:34:44.271754 1417 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:34:44.275350 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 10 00:34:44.275561 kubelet[1417]: I0710 00:34:44.275544 1417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:34:44.279994 kubelet[1417]: I0710 00:34:44.279972 1417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:34:44.281289 kubelet[1417]: I0710 00:34:44.281275 1417 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:34:44.281538 kubelet[1417]: E0710 00:34:44.281523 1417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Jul 10 00:34:44.282085 kubelet[1417]: I0710 00:34:44.282071 1417 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:34:44.282240 kubelet[1417]: I0710 00:34:44.282229 1417 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:34:44.286301 kubelet[1417]: I0710 00:34:44.286229 1417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:34:44.286940 kubelet[1417]: I0710 00:34:44.286913 1417 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:34:44.287073 kubelet[1417]: I0710 00:34:44.287052 1417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:34:44.288344 kubelet[1417]: I0710 00:34:44.288310 1417 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:34:44.295027 kubelet[1417]: I0710 00:34:44.290293 1417 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:34:44.299206 kubelet[1417]: I0710 00:34:44.299187 1417 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:34:44.299206 kubelet[1417]: I0710 00:34:44.299203 1417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:34:44.299287 kubelet[1417]: I0710 00:34:44.299223 1417 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:34:44.301259 kubelet[1417]: E0710 00:34:44.301227 1417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.80\" not found" node="10.0.0.80" Jul 10 00:34:44.369568 kubelet[1417]: I0710 00:34:44.369529 1417 policy_none.go:49] "None policy: Start" Jul 10 00:34:44.369568 kubelet[1417]: I0710 00:34:44.369560 1417 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:34:44.369568 kubelet[1417]: I0710 00:34:44.369572 1417 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:34:44.374159 systemd[1]: Created slice kubepods.slice. Jul 10 00:34:44.378738 systemd[1]: Created slice kubepods-besteffort.slice. Jul 10 00:34:44.382111 kubelet[1417]: E0710 00:34:44.382081 1417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Jul 10 00:34:44.387197 systemd[1]: Created slice kubepods-burstable.slice. 
Jul 10 00:34:44.388480 kubelet[1417]: I0710 00:34:44.388457 1417 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:34:44.388706 kubelet[1417]: I0710 00:34:44.388688 1417 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:34:44.389338 kubelet[1417]: I0710 00:34:44.388767 1417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:34:44.389715 kubelet[1417]: I0710 00:34:44.389668 1417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:34:44.390556 kubelet[1417]: E0710 00:34:44.390529 1417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:34:44.390623 kubelet[1417]: E0710 00:34:44.390574 1417 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.80\" not found" Jul 10 00:34:44.449882 kubelet[1417]: I0710 00:34:44.449759 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:34:44.451026 kubelet[1417]: I0710 00:34:44.450808 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:34:44.451026 kubelet[1417]: I0710 00:34:44.450828 1417 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:34:44.451026 kubelet[1417]: I0710 00:34:44.450846 1417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:34:44.451026 kubelet[1417]: I0710 00:34:44.450862 1417 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:34:44.451026 kubelet[1417]: E0710 00:34:44.451010 1417 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 10 00:34:44.491223 kubelet[1417]: I0710 00:34:44.491184 1417 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.80" Jul 10 00:34:44.502427 kubelet[1417]: I0710 00:34:44.502392 1417 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.80" Jul 10 00:34:44.511654 kubelet[1417]: I0710 00:34:44.511604 1417 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 10 00:34:44.511938 env[1214]: time="2025-07-10T00:34:44.511882418Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:34:44.512388 kubelet[1417]: I0710 00:34:44.512361 1417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 10 00:34:44.589890 sudo[1314]: pam_unix(sudo:session): session closed for user root Jul 10 00:34:44.592052 sshd[1311]: pam_unix(sshd:session): session closed for user core Jul 10 00:34:44.594665 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:58634.service: Deactivated successfully. Jul 10 00:34:44.595478 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:34:44.596097 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:34:44.596917 systemd-logind[1204]: Removed session 5. 
Jul 10 00:34:45.185925 kubelet[1417]: I0710 00:34:45.185875 1417 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 10 00:34:45.186262 kubelet[1417]: W0710 00:34:45.186083 1417 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 10 00:34:45.186262 kubelet[1417]: W0710 00:34:45.186118 1417 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 10 00:34:45.186262 kubelet[1417]: W0710 00:34:45.186140 1417 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 10 00:34:45.250846 kubelet[1417]: E0710 00:34:45.250804 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:45.251000 kubelet[1417]: I0710 00:34:45.250863 1417 apiserver.go:52] "Watching apiserver" Jul 10 00:34:45.271941 systemd[1]: Created slice kubepods-besteffort-pod13973736_d70c_435e_9208_890502a495a3.slice. Jul 10 00:34:45.289780 kubelet[1417]: I0710 00:34:45.289747 1417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:34:45.290137 kubelet[1417]: I0710 00:34:45.290115 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-hubble-tls\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290236 kubelet[1417]: I0710 00:34:45.290221 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cfp\" (UniqueName: \"kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-kube-api-access-64cfp\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290325 kubelet[1417]: I0710 00:34:45.290309 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13973736-d70c-435e-9208-890502a495a3-xtables-lock\") pod \"kube-proxy-6qvbq\" (UID: \"13973736-d70c-435e-9208-890502a495a3\") " pod="kube-system/kube-proxy-6qvbq" Jul 10 00:34:45.290404 kubelet[1417]: I0710 00:34:45.290391 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-cgroup\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290480 kubelet[1417]: I0710 00:34:45.290467 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cni-path\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290570 kubelet[1417]: 
I0710 00:34:45.290557 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-etc-cni-netd\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290662 kubelet[1417]: I0710 00:34:45.290650 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnk9\" (UniqueName: \"kubernetes.io/projected/13973736-d70c-435e-9208-890502a495a3-kube-api-access-gxnk9\") pod \"kube-proxy-6qvbq\" (UID: \"13973736-d70c-435e-9208-890502a495a3\") " pod="kube-system/kube-proxy-6qvbq" Jul 10 00:34:45.290746 kubelet[1417]: I0710 00:34:45.290728 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-xtables-lock\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290820 kubelet[1417]: I0710 00:34:45.290806 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-kernel\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290899 kubelet[1417]: I0710 00:34:45.290886 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-bpf-maps\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.290983 kubelet[1417]: I0710 00:34:45.290970 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-lib-modules\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.291086 kubelet[1417]: I0710 00:34:45.291073 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0afc8bde-982f-4750-a3f5-637da5b3d369-clustermesh-secrets\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.291166 kubelet[1417]: I0710 00:34:45.291153 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-config-path\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.291242 kubelet[1417]: I0710 00:34:45.291228 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-net\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.291320 kubelet[1417]: I0710 00:34:45.291307 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/13973736-d70c-435e-9208-890502a495a3-kube-proxy\") pod \"kube-proxy-6qvbq\" (UID: \"13973736-d70c-435e-9208-890502a495a3\") " pod="kube-system/kube-proxy-6qvbq" Jul 10 00:34:45.291417 kubelet[1417]: I0710 00:34:45.291402 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13973736-d70c-435e-9208-890502a495a3-lib-modules\") pod \"kube-proxy-6qvbq\" (UID: \"13973736-d70c-435e-9208-890502a495a3\") " pod="kube-system/kube-proxy-6qvbq" Jul 10 00:34:45.291584 kubelet[1417]: I0710 00:34:45.291569 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-run\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.291665 kubelet[1417]: I0710 00:34:45.291652 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-hostproc\") pod \"cilium-89knj\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " pod="kube-system/cilium-89knj" Jul 10 00:34:45.292059 systemd[1]: Created slice kubepods-burstable-pod0afc8bde_982f_4750_a3f5_637da5b3d369.slice. Jul 10 00:34:45.393418 kubelet[1417]: I0710 00:34:45.393366 1417 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:34:45.593709 kubelet[1417]: E0710 00:34:45.592994 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:45.594618 env[1214]: time="2025-07-10T00:34:45.594569137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6qvbq,Uid:13973736-d70c-435e-9208-890502a495a3,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:45.609163 kubelet[1417]: E0710 00:34:45.609133 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:45.609955 env[1214]: time="2025-07-10T00:34:45.609879617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89knj,Uid:0afc8bde-982f-4750-a3f5-637da5b3d369,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:46.199309 env[1214]: time="2025-07-10T00:34:46.199261817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.200780 env[1214]: time="2025-07-10T00:34:46.200741722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.205072 env[1214]: time="2025-07-10T00:34:46.205017119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.207348 env[1214]: time="2025-07-10T00:34:46.207317453Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.209489 env[1214]: time="2025-07-10T00:34:46.209460480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.210868 env[1214]: time="2025-07-10T00:34:46.210839134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.212789 env[1214]: time="2025-07-10T00:34:46.212758983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.214409 env[1214]: time="2025-07-10T00:34:46.214372074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:46.238429 env[1214]: time="2025-07-10T00:34:46.237507474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:46.238429 env[1214]: time="2025-07-10T00:34:46.237545298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:46.238429 env[1214]: time="2025-07-10T00:34:46.237555635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:46.238429 env[1214]: time="2025-07-10T00:34:46.237734819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fda3750f65f5edf9711b3a944c890ac18b14f0e24b63e41664e186245490dece pid=1479 runtime=io.containerd.runc.v2 Jul 10 00:34:46.246301 env[1214]: time="2025-07-10T00:34:46.246041519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:46.246301 env[1214]: time="2025-07-10T00:34:46.246079944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:46.246301 env[1214]: time="2025-07-10T00:34:46.246091043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:46.246457 env[1214]: time="2025-07-10T00:34:46.246295108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c pid=1488 runtime=io.containerd.runc.v2 Jul 10 00:34:46.251789 kubelet[1417]: E0710 00:34:46.251744 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:46.266315 systemd[1]: Started cri-containerd-fda3750f65f5edf9711b3a944c890ac18b14f0e24b63e41664e186245490dece.scope. Jul 10 00:34:46.268597 systemd[1]: Started cri-containerd-e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c.scope. 
Jul 10 00:34:46.309883 env[1214]: time="2025-07-10T00:34:46.309843193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6qvbq,Uid:13973736-d70c-435e-9208-890502a495a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda3750f65f5edf9711b3a944c890ac18b14f0e24b63e41664e186245490dece\"" Jul 10 00:34:46.311403 kubelet[1417]: E0710 00:34:46.310907 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:46.312208 env[1214]: time="2025-07-10T00:34:46.312179667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 00:34:46.314323 env[1214]: time="2025-07-10T00:34:46.314292003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89knj,Uid:0afc8bde-982f-4750-a3f5-637da5b3d369,Namespace:kube-system,Attempt:0,} returns sandbox id \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\"" Jul 10 00:34:46.315103 kubelet[1417]: E0710 00:34:46.314919 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:46.399208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388514443.mount: Deactivated successfully. Jul 10 00:34:47.251899 kubelet[1417]: E0710 00:34:47.251846 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:47.412243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81799603.mount: Deactivated successfully. Jul 10 00:34:47.874506 env[1214]: time="2025-07-10T00:34:47.874462296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:47.875623 env[1214]: time="2025-07-10T00:34:47.875594533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:47.881014 env[1214]: time="2025-07-10T00:34:47.880973390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:47.882754 env[1214]: time="2025-07-10T00:34:47.882723006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:47.883211 env[1214]: time="2025-07-10T00:34:47.883190308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 10 00:34:47.884291 env[1214]: time="2025-07-10T00:34:47.884222827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:34:47.885268 env[1214]: time="2025-07-10T00:34:47.885232950Z" level=info msg="CreateContainer within sandbox \"fda3750f65f5edf9711b3a944c890ac18b14f0e24b63e41664e186245490dece\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:34:47.896400 env[1214]: time="2025-07-10T00:34:47.896360449Z" level=info msg="CreateContainer within sandbox 
\"fda3750f65f5edf9711b3a944c890ac18b14f0e24b63e41664e186245490dece\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"def65a2de415fa23a5c0d2ae356d01fd06a1ad8d7e39e505c9cc43b4663df7b1\"" Jul 10 00:34:47.896901 env[1214]: time="2025-07-10T00:34:47.896835083Z" level=info msg="StartContainer for \"def65a2de415fa23a5c0d2ae356d01fd06a1ad8d7e39e505c9cc43b4663df7b1\"" Jul 10 00:34:47.916211 systemd[1]: Started cri-containerd-def65a2de415fa23a5c0d2ae356d01fd06a1ad8d7e39e505c9cc43b4663df7b1.scope. Jul 10 00:34:47.955794 env[1214]: time="2025-07-10T00:34:47.954683970Z" level=info msg="StartContainer for \"def65a2de415fa23a5c0d2ae356d01fd06a1ad8d7e39e505c9cc43b4663df7b1\" returns successfully" Jul 10 00:34:48.252990 kubelet[1417]: E0710 00:34:48.252923 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:48.461017 kubelet[1417]: E0710 00:34:48.460976 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:48.475714 kubelet[1417]: I0710 00:34:48.475613 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6qvbq" podStartSLOduration=2.903400262 podStartE2EDuration="4.47559691s" podCreationTimestamp="2025-07-10 00:34:44 +0000 UTC" firstStartedPulling="2025-07-10 00:34:46.311754508 +0000 UTC m=+3.251053578" lastFinishedPulling="2025-07-10 00:34:47.883951155 +0000 UTC m=+4.823250226" observedRunningTime="2025-07-10 00:34:48.47386281 +0000 UTC m=+5.413161840" watchObservedRunningTime="2025-07-10 00:34:48.47559691 +0000 UTC m=+5.414895980" Jul 10 00:34:49.253455 kubelet[1417]: E0710 00:34:49.253401 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:49.462481 kubelet[1417]: E0710 00:34:49.462434 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:50.253834 kubelet[1417]: E0710 00:34:50.253794 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:51.254010 kubelet[1417]: E0710 00:34:51.253952 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:52.254340 kubelet[1417]: E0710 00:34:52.254301 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:53.255228 kubelet[1417]: E0710 00:34:53.255179 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:53.340923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975206114.mount: Deactivated successfully. 
Jul 10 00:34:54.256938 kubelet[1417]: E0710 00:34:54.256890 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:55.257217 kubelet[1417]: E0710 00:34:55.257177 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:55.527852 env[1214]: time="2025-07-10T00:34:55.527755330Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:55.529258 env[1214]: time="2025-07-10T00:34:55.529227405Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:55.530844 env[1214]: time="2025-07-10T00:34:55.530814949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:55.531429 env[1214]: time="2025-07-10T00:34:55.531398823Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 00:34:55.541296 env[1214]: time="2025-07-10T00:34:55.541266774Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:34:55.553912 env[1214]: time="2025-07-10T00:34:55.553858065Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\"" Jul 10 00:34:55.554763 env[1214]: time="2025-07-10T00:34:55.554729931Z" level=info msg="StartContainer for \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\"" Jul 10 00:34:55.582851 systemd[1]: Started cri-containerd-bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1.scope. Jul 10 00:34:55.628547 env[1214]: time="2025-07-10T00:34:55.628437898Z" level=info msg="StartContainer for \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\" returns successfully" Jul 10 00:34:55.681973 systemd[1]: cri-containerd-bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1.scope: Deactivated successfully. 
Jul 10 00:34:55.849551 env[1214]: time="2025-07-10T00:34:55.849435679Z" level=info msg="shim disconnected" id=bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1 Jul 10 00:34:55.849551 env[1214]: time="2025-07-10T00:34:55.849479240Z" level=warning msg="cleaning up after shim disconnected" id=bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1 namespace=k8s.io Jul 10 00:34:55.849551 env[1214]: time="2025-07-10T00:34:55.849488249Z" level=info msg="cleaning up dead shim" Jul 10 00:34:55.855819 env[1214]: time="2025-07-10T00:34:55.855757590Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1766 runtime=io.containerd.runc.v2\n" Jul 10 00:34:56.257590 kubelet[1417]: E0710 00:34:56.257520 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:56.473649 kubelet[1417]: E0710 00:34:56.473592 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:56.475657 env[1214]: time="2025-07-10T00:34:56.475616564Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:34:56.494185 env[1214]: time="2025-07-10T00:34:56.494125288Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\"" Jul 10 00:34:56.494802 env[1214]: time="2025-07-10T00:34:56.494716293Z" level=info msg="StartContainer for \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\"" Jul 10 00:34:56.508659 systemd[1]: Started cri-containerd-ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45.scope. Jul 10 00:34:56.545067 env[1214]: time="2025-07-10T00:34:56.545014019Z" level=info msg="StartContainer for \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\" returns successfully" Jul 10 00:34:56.547960 systemd[1]: run-containerd-runc-k8s.io-bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1-runc.pd4MXQ.mount: Deactivated successfully. Jul 10 00:34:56.548056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1-rootfs.mount: Deactivated successfully. Jul 10 00:34:56.558913 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:34:56.559122 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:34:56.560636 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:34:56.562147 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:56.563710 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:34:56.567705 systemd[1]: cri-containerd-ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45.scope: Deactivated successfully. Jul 10 00:34:56.570761 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:56.585282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45-rootfs.mount: Deactivated successfully. 
Jul 10 00:34:56.591252 env[1214]: time="2025-07-10T00:34:56.591208621Z" level=info msg="shim disconnected" id=ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45 Jul 10 00:34:56.591420 env[1214]: time="2025-07-10T00:34:56.591401912Z" level=warning msg="cleaning up after shim disconnected" id=ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45 namespace=k8s.io Jul 10 00:34:56.591480 env[1214]: time="2025-07-10T00:34:56.591468251Z" level=info msg="cleaning up dead shim" Jul 10 00:34:56.598992 env[1214]: time="2025-07-10T00:34:56.598954422Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1830 runtime=io.containerd.runc.v2\n" Jul 10 00:34:57.257859 kubelet[1417]: E0710 00:34:57.257779 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:57.476730 kubelet[1417]: E0710 00:34:57.476614 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:57.478439 env[1214]: time="2025-07-10T00:34:57.478398541Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:34:57.491468 env[1214]: time="2025-07-10T00:34:57.491407457Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\"" Jul 10 00:34:57.492189 env[1214]: time="2025-07-10T00:34:57.492150997Z" level=info msg="StartContainer for \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\"" Jul 10 00:34:57.506812 systemd[1]: Started cri-containerd-dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3.scope. Jul 10 00:34:57.541985 env[1214]: time="2025-07-10T00:34:57.541606831Z" level=info msg="StartContainer for \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\" returns successfully" Jul 10 00:34:57.557390 systemd[1]: cri-containerd-dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3.scope: Deactivated successfully. Jul 10 00:34:57.577216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3-rootfs.mount: Deactivated successfully. 
Jul 10 00:34:57.582915 env[1214]: time="2025-07-10T00:34:57.582870683Z" level=info msg="shim disconnected" id=dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3 Jul 10 00:34:57.583317 env[1214]: time="2025-07-10T00:34:57.583294596Z" level=warning msg="cleaning up after shim disconnected" id=dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3 namespace=k8s.io Jul 10 00:34:57.583389 env[1214]: time="2025-07-10T00:34:57.583376904Z" level=info msg="cleaning up dead shim" Jul 10 00:34:57.589842 env[1214]: time="2025-07-10T00:34:57.589806260Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1887 runtime=io.containerd.runc.v2\n" Jul 10 00:34:58.258228 kubelet[1417]: E0710 00:34:58.258183 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:58.480431 kubelet[1417]: E0710 00:34:58.479701 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:58.482265 env[1214]: time="2025-07-10T00:34:58.482224709Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:34:58.498891 env[1214]: time="2025-07-10T00:34:58.498843687Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\"" Jul 10 00:34:58.499779 env[1214]: time="2025-07-10T00:34:58.499749915Z" level=info msg="StartContainer for \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\"" Jul 10 00:34:58.525160 systemd[1]: Started cri-containerd-cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa.scope. Jul 10 00:34:58.554327 systemd[1]: cri-containerd-cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa.scope: Deactivated successfully. Jul 10 00:34:58.554934 env[1214]: time="2025-07-10T00:34:58.554891618Z" level=info msg="StartContainer for \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\" returns successfully" Jul 10 00:34:58.570990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa-rootfs.mount: Deactivated successfully. 
Jul 10 00:34:58.576567 env[1214]: time="2025-07-10T00:34:58.576522630Z" level=info msg="shim disconnected" id=cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa Jul 10 00:34:58.576567 env[1214]: time="2025-07-10T00:34:58.576568626Z" level=warning msg="cleaning up after shim disconnected" id=cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa namespace=k8s.io Jul 10 00:34:58.576742 env[1214]: time="2025-07-10T00:34:58.576577873Z" level=info msg="cleaning up dead shim" Jul 10 00:34:58.584162 env[1214]: time="2025-07-10T00:34:58.584118482Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1943 runtime=io.containerd.runc.v2\n" Jul 10 00:34:59.259167 kubelet[1417]: E0710 00:34:59.259069 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:34:59.486644 kubelet[1417]: E0710 00:34:59.486596 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:59.489613 env[1214]: time="2025-07-10T00:34:59.489575507Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:34:59.507820 env[1214]: time="2025-07-10T00:34:59.507768428Z" level=info msg="CreateContainer within sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\"" Jul 10 00:34:59.508513 env[1214]: time="2025-07-10T00:34:59.508478828Z" level=info msg="StartContainer for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\"" Jul 10 00:34:59.525006 systemd[1]: Started cri-containerd-0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c.scope. Jul 10 00:34:59.568607 env[1214]: time="2025-07-10T00:34:59.568541724Z" level=info msg="StartContainer for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" returns successfully" Jul 10 00:34:59.704741 kubelet[1417]: I0710 00:34:59.704713 1417 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:35:00.096069 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 10 00:35:00.259468 kubelet[1417]: E0710 00:35:00.259408 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:00.363073 kernel: Initializing XFRM netlink socket Jul 10 00:35:00.366060 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
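The repeated kernel warning above ("Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!") appears while cilium-agent loads its BPF datapath and reflects the kernel.unprivileged_bpf_disabled sysctl being 0 on this host. A minimal sketch, assuming the standard procfs path, that checks the setting:

```go
// Illustrative sketch only: inspect the sysctl behind the kernel warning above.
// kernel.unprivileged_bpf_disabled = 0 allows unprivileged eBPF (which triggers
// the warning), 1 disables it until reboot, 2 disables it but can be reverted.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	switch strings.TrimSpace(string(raw)) {
	case "0":
		fmt.Println("unprivileged eBPF enabled (matches the Spectre v2 BHB warning)")
	case "1", "2":
		fmt.Println("unprivileged eBPF disabled")
	}
}
```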
Jul 10 00:35:00.491820 kubelet[1417]: E0710 00:35:00.491776 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:00.583357 kubelet[1417]: I0710 00:35:00.583285 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-89knj" podStartSLOduration=7.3662734069999996 podStartE2EDuration="16.583265026s" podCreationTimestamp="2025-07-10 00:34:44 +0000 UTC" firstStartedPulling="2025-07-10 00:34:46.315489149 +0000 UTC m=+3.254788219" lastFinishedPulling="2025-07-10 00:34:55.532480768 +0000 UTC m=+12.471779838" observedRunningTime="2025-07-10 00:35:00.539592247 +0000 UTC m=+17.478891317" watchObservedRunningTime="2025-07-10 00:35:00.583265026 +0000 UTC m=+17.522564096" Jul 10 00:35:00.591428 systemd[1]: Created slice kubepods-besteffort-pod02d3a115_4eda_4fe0_8ae1_1a83d234742f.slice. Jul 10 00:35:00.593758 kubelet[1417]: I0710 00:35:00.593719 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlkc\" (UniqueName: \"kubernetes.io/projected/02d3a115-4eda-4fe0-8ae1-1a83d234742f-kube-api-access-xhlkc\") pod \"nginx-deployment-7fcdb87857-cxdr2\" (UID: \"02d3a115-4eda-4fe0-8ae1-1a83d234742f\") " pod="default/nginx-deployment-7fcdb87857-cxdr2" Jul 10 00:35:00.898148 env[1214]: time="2025-07-10T00:35:00.898101061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-cxdr2,Uid:02d3a115-4eda-4fe0-8ae1-1a83d234742f,Namespace:default,Attempt:0,}" Jul 10 00:35:01.259596 kubelet[1417]: E0710 00:35:01.259538 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:01.493831 kubelet[1417]: E0710 00:35:01.493798 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:01.979363 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 10 00:35:01.979471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 10 00:35:01.978484 systemd-networkd[1038]: cilium_host: Link UP Jul 10 00:35:01.978611 systemd-networkd[1038]: cilium_net: Link UP Jul 10 00:35:01.978736 systemd-networkd[1038]: cilium_net: Gained carrier Jul 10 00:35:01.980198 systemd-networkd[1038]: cilium_host: Gained carrier Jul 10 00:35:02.069583 systemd-networkd[1038]: cilium_vxlan: Link UP Jul 10 00:35:02.069591 systemd-networkd[1038]: cilium_vxlan: Gained carrier Jul 10 00:35:02.259956 kubelet[1417]: E0710 00:35:02.259834 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:02.362070 kernel: NET: Registered PF_ALG protocol family Jul 10 00:35:02.495336 kubelet[1417]: E0710 00:35:02.495286 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:02.626194 systemd-networkd[1038]: cilium_net: Gained IPv6LL Jul 10 00:35:02.923058 systemd-networkd[1038]: lxc_health: Link UP Jul 10 00:35:02.932240 systemd-networkd[1038]: lxc_health: Gained carrier Jul 10 00:35:02.933060 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:35:02.946159 systemd-networkd[1038]: cilium_host: Gained IPv6LL Jul 10 00:35:03.261090 
kubelet[1417]: E0710 00:35:03.260940 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:03.460293 systemd-networkd[1038]: lxcda221f0b91e8: Link UP Jul 10 00:35:03.467055 kernel: eth0: renamed from tmp9306c Jul 10 00:35:03.477525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:35:03.477631 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcda221f0b91e8: link becomes ready Jul 10 00:35:03.477594 systemd-networkd[1038]: lxcda221f0b91e8: Gained carrier Jul 10 00:35:03.496899 kubelet[1417]: E0710 00:35:03.496828 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:03.523153 systemd-networkd[1038]: cilium_vxlan: Gained IPv6LL Jul 10 00:35:04.250876 kubelet[1417]: E0710 00:35:04.250830 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:04.261067 kubelet[1417]: E0710 00:35:04.261035 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:04.498397 kubelet[1417]: E0710 00:35:04.498357 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:04.546190 systemd-networkd[1038]: lxc_health: Gained IPv6LL Jul 10 00:35:05.261869 kubelet[1417]: E0710 00:35:05.261827 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:05.315156 systemd-networkd[1038]: lxcda221f0b91e8: Gained IPv6LL Jul 10 00:35:05.499662 kubelet[1417]: E0710 00:35:05.499627 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:06.262339 kubelet[1417]: E0710 00:35:06.262295 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:06.982664 env[1214]: time="2025-07-10T00:35:06.982593703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:06.982977 env[1214]: time="2025-07-10T00:35:06.982636563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:06.982977 env[1214]: time="2025-07-10T00:35:06.982648048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:06.983188 env[1214]: time="2025-07-10T00:35:06.983153604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9306c8cfbf92f74ec068aca6088dd14c8ee624e8b72b4cdb0215497e331ff0f9 pid=2480 runtime=io.containerd.runc.v2 Jul 10 00:35:06.993782 systemd[1]: Started cri-containerd-9306c8cfbf92f74ec068aca6088dd14c8ee624e8b72b4cdb0215497e331ff0f9.scope. 
Jul 10 00:35:07.064319 systemd-resolved[1161]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:35:07.080942 env[1214]: time="2025-07-10T00:35:07.080904207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-cxdr2,Uid:02d3a115-4eda-4fe0-8ae1-1a83d234742f,Namespace:default,Attempt:0,} returns sandbox id \"9306c8cfbf92f74ec068aca6088dd14c8ee624e8b72b4cdb0215497e331ff0f9\"" Jul 10 00:35:07.082528 env[1214]: time="2025-07-10T00:35:07.082499944Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 10 00:35:07.263082 kubelet[1417]: E0710 00:35:07.262734 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:08.262909 kubelet[1417]: E0710 00:35:08.262855 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:09.263522 kubelet[1417]: E0710 00:35:09.263476 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:09.278062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177296052.mount: Deactivated successfully. Jul 10 00:35:10.263853 kubelet[1417]: E0710 00:35:10.263800 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:10.497090 env[1214]: time="2025-07-10T00:35:10.497039029Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.500735 env[1214]: time="2025-07-10T00:35:10.500695346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.502735 env[1214]: time="2025-07-10T00:35:10.502708111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.503493 env[1214]: time="2025-07-10T00:35:10.503466864Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 10 00:35:10.504305 env[1214]: time="2025-07-10T00:35:10.504281718Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.505930 env[1214]: time="2025-07-10T00:35:10.505900101Z" level=info msg="CreateContainer within sandbox \"9306c8cfbf92f74ec068aca6088dd14c8ee624e8b72b4cdb0215497e331ff0f9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 10 00:35:10.515182 env[1214]: time="2025-07-10T00:35:10.515084208Z" level=info msg="CreateContainer within sandbox \"9306c8cfbf92f74ec068aca6088dd14c8ee624e8b72b4cdb0215497e331ff0f9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d1bb22fa3a90558dc8c2a507f99c1fc75a78e21da0d33318a00473f0fc429f88\"" Jul 10 00:35:10.515526 env[1214]: time="2025-07-10T00:35:10.515485913Z" level=info msg="StartContainer for \"d1bb22fa3a90558dc8c2a507f99c1fc75a78e21da0d33318a00473f0fc429f88\"" Jul 10 00:35:10.532967 systemd[1]: Started 
cri-containerd-d1bb22fa3a90558dc8c2a507f99c1fc75a78e21da0d33318a00473f0fc429f88.scope. Jul 10 00:35:10.564959 env[1214]: time="2025-07-10T00:35:10.564914913Z" level=info msg="StartContainer for \"d1bb22fa3a90558dc8c2a507f99c1fc75a78e21da0d33318a00473f0fc429f88\" returns successfully" Jul 10 00:35:11.264155 kubelet[1417]: E0710 00:35:11.264116 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:11.512768 systemd[1]: run-containerd-runc-k8s.io-d1bb22fa3a90558dc8c2a507f99c1fc75a78e21da0d33318a00473f0fc429f88-runc.GDVLEm.mount: Deactivated successfully. Jul 10 00:35:11.522590 kubelet[1417]: I0710 00:35:11.522324 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-cxdr2" podStartSLOduration=8.099683564 podStartE2EDuration="11.522309616s" podCreationTimestamp="2025-07-10 00:35:00 +0000 UTC" firstStartedPulling="2025-07-10 00:35:07.081796677 +0000 UTC m=+24.021095747" lastFinishedPulling="2025-07-10 00:35:10.504422769 +0000 UTC m=+27.443721799" observedRunningTime="2025-07-10 00:35:11.522141399 +0000 UTC m=+28.461440429" watchObservedRunningTime="2025-07-10 00:35:11.522309616 +0000 UTC m=+28.461608686" Jul 10 00:35:12.264514 kubelet[1417]: E0710 00:35:12.264464 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:13.265448 kubelet[1417]: E0710 00:35:13.265410 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:14.265879 kubelet[1417]: E0710 00:35:14.265842 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:15.266852 kubelet[1417]: E0710 00:35:15.266804 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:16.267320 kubelet[1417]: E0710 00:35:16.267269 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:17.268011 kubelet[1417]: E0710 00:35:17.267967 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:17.416051 systemd[1]: Created slice kubepods-besteffort-pode9563adc_df62_4609_b00d_047191aaca6b.slice. 
Jul 10 00:35:17.478305 kubelet[1417]: I0710 00:35:17.478262 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e9563adc-df62-4609-b00d-047191aaca6b-data\") pod \"nfs-server-provisioner-0\" (UID: \"e9563adc-df62-4609-b00d-047191aaca6b\") " pod="default/nfs-server-provisioner-0" Jul 10 00:35:17.478573 kubelet[1417]: I0710 00:35:17.478553 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8jk\" (UniqueName: \"kubernetes.io/projected/e9563adc-df62-4609-b00d-047191aaca6b-kube-api-access-7t8jk\") pod \"nfs-server-provisioner-0\" (UID: \"e9563adc-df62-4609-b00d-047191aaca6b\") " pod="default/nfs-server-provisioner-0" Jul 10 00:35:17.719879 env[1214]: time="2025-07-10T00:35:17.719838401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e9563adc-df62-4609-b00d-047191aaca6b,Namespace:default,Attempt:0,}" Jul 10 00:35:17.748815 systemd-networkd[1038]: lxcc53577212057: Link UP Jul 10 00:35:17.759059 kernel: eth0: renamed from tmpfae4f Jul 10 00:35:17.765889 systemd-networkd[1038]: lxcc53577212057: Gained carrier Jul 10 00:35:17.766072 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:35:17.766123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc53577212057: link becomes ready Jul 10 00:35:17.931870 env[1214]: time="2025-07-10T00:35:17.931780829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:17.931870 env[1214]: time="2025-07-10T00:35:17.931820838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:17.931870 env[1214]: time="2025-07-10T00:35:17.931839322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:17.932289 env[1214]: time="2025-07-10T00:35:17.932248336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fae4f828b17600ef1d2c564e542b70742aa2cc7ecd95b1d8c773fd2006f4fb27 pid=2611 runtime=io.containerd.runc.v2 Jul 10 00:35:17.943300 systemd[1]: Started cri-containerd-fae4f828b17600ef1d2c564e542b70742aa2cc7ecd95b1d8c773fd2006f4fb27.scope. 
Jul 10 00:35:17.972154 systemd-resolved[1161]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:35:17.987182 env[1214]: time="2025-07-10T00:35:17.987118875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e9563adc-df62-4609-b00d-047191aaca6b,Namespace:default,Attempt:0,} returns sandbox id \"fae4f828b17600ef1d2c564e542b70742aa2cc7ecd95b1d8c773fd2006f4fb27\"" Jul 10 00:35:17.988649 env[1214]: time="2025-07-10T00:35:17.988614138Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 10 00:35:18.269278 kubelet[1417]: E0710 00:35:18.269158 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:19.270290 kubelet[1417]: E0710 00:35:19.270240 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:19.396402 systemd-networkd[1038]: lxcc53577212057: Gained IPv6LL Jul 10 00:35:20.259281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980413172.mount: Deactivated successfully. Jul 10 00:35:20.271395 kubelet[1417]: E0710 00:35:20.271343 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:20.973149 update_engine[1208]: I0710 00:35:20.973098 1208 update_attempter.cc:509] Updating boot flags... Jul 10 00:35:21.272324 kubelet[1417]: E0710 00:35:21.272133 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:21.964315 env[1214]: time="2025-07-10T00:35:21.964263021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:21.965602 env[1214]: time="2025-07-10T00:35:21.965574613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:21.967560 env[1214]: time="2025-07-10T00:35:21.967523918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:21.968833 env[1214]: time="2025-07-10T00:35:21.968802624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:21.970225 env[1214]: time="2025-07-10T00:35:21.970192631Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 10 00:35:21.972212 env[1214]: time="2025-07-10T00:35:21.972181383Z" level=info msg="CreateContainer within sandbox \"fae4f828b17600ef1d2c564e542b70742aa2cc7ecd95b1d8c773fd2006f4fb27\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 10 00:35:21.982739 env[1214]: time="2025-07-10T00:35:21.982698405Z" level=info msg="CreateContainer within sandbox \"fae4f828b17600ef1d2c564e542b70742aa2cc7ecd95b1d8c773fd2006f4fb27\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns 
container id \"65b06c68e2fc768e6b506f8a7390dc26d3da0e26045296a93988f6539d657741\"" Jul 10 00:35:21.983218 env[1214]: time="2025-07-10T00:35:21.983191613Z" level=info msg="StartContainer for \"65b06c68e2fc768e6b506f8a7390dc26d3da0e26045296a93988f6539d657741\"" Jul 10 00:35:22.001804 systemd[1]: Started cri-containerd-65b06c68e2fc768e6b506f8a7390dc26d3da0e26045296a93988f6539d657741.scope. Jul 10 00:35:22.063772 env[1214]: time="2025-07-10T00:35:22.063717100Z" level=info msg="StartContainer for \"65b06c68e2fc768e6b506f8a7390dc26d3da0e26045296a93988f6539d657741\" returns successfully" Jul 10 00:35:22.273819 kubelet[1417]: E0710 00:35:22.272535 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:23.273012 kubelet[1417]: E0710 00:35:23.272968 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:24.251094 kubelet[1417]: E0710 00:35:24.251045 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:24.273547 kubelet[1417]: E0710 00:35:24.273510 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:25.273971 kubelet[1417]: E0710 00:35:25.273926 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:26.274690 kubelet[1417]: E0710 00:35:26.274639 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:27.274982 kubelet[1417]: E0710 00:35:27.274943 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:28.275877 kubelet[1417]: E0710 00:35:28.275828 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:29.276762 kubelet[1417]: E0710 00:35:29.276732 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:30.277770 kubelet[1417]: E0710 00:35:30.277730 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:31.278666 kubelet[1417]: E0710 00:35:31.278624 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:31.407370 kubelet[1417]: I0710 00:35:31.407310 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.424744774 podStartE2EDuration="14.407290133s" podCreationTimestamp="2025-07-10 00:35:17 +0000 UTC" firstStartedPulling="2025-07-10 00:35:17.988330553 +0000 UTC m=+34.927629623" lastFinishedPulling="2025-07-10 00:35:21.970875912 +0000 UTC m=+38.910174982" observedRunningTime="2025-07-10 00:35:22.551315898 +0000 UTC m=+39.490614968" watchObservedRunningTime="2025-07-10 00:35:31.407290133 +0000 UTC m=+48.346589203" Jul 10 00:35:31.412639 systemd[1]: Created slice kubepods-besteffort-podd6d15651_ec3b_464d_8d05_734a1da040f7.slice. 
Jul 10 00:35:31.550314 kubelet[1417]: I0710 00:35:31.550193 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mh6z\" (UniqueName: \"kubernetes.io/projected/d6d15651-ec3b-464d-8d05-734a1da040f7-kube-api-access-9mh6z\") pod \"test-pod-1\" (UID: \"d6d15651-ec3b-464d-8d05-734a1da040f7\") " pod="default/test-pod-1" Jul 10 00:35:31.550314 kubelet[1417]: I0710 00:35:31.550240 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4634c4cc-f490-49d8-bd75-6e805deaf42d\" (UniqueName: \"kubernetes.io/nfs/d6d15651-ec3b-464d-8d05-734a1da040f7-pvc-4634c4cc-f490-49d8-bd75-6e805deaf42d\") pod \"test-pod-1\" (UID: \"d6d15651-ec3b-464d-8d05-734a1da040f7\") " pod="default/test-pod-1" Jul 10 00:35:31.674058 kernel: FS-Cache: Loaded Jul 10 00:35:31.700363 kernel: RPC: Registered named UNIX socket transport module. Jul 10 00:35:31.700462 kernel: RPC: Registered udp transport module. Jul 10 00:35:31.700495 kernel: RPC: Registered tcp transport module. Jul 10 00:35:31.701290 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 10 00:35:31.742065 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 10 00:35:31.870290 kernel: NFS: Registering the id_resolver key type Jul 10 00:35:31.870475 kernel: Key type id_resolver registered Jul 10 00:35:31.870502 kernel: Key type id_legacy registered Jul 10 00:35:31.893461 nfsidmap[2742]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 10 00:35:31.898590 nfsidmap[2745]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 10 00:35:32.015602 env[1214]: time="2025-07-10T00:35:32.015202632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d6d15651-ec3b-464d-8d05-734a1da040f7,Namespace:default,Attempt:0,}" Jul 10 00:35:32.065247 systemd-networkd[1038]: lxc97ff10337c65: Link UP Jul 10 00:35:32.079260 kernel: eth0: renamed from tmp9800c Jul 10 00:35:32.084678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:35:32.084769 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc97ff10337c65: link becomes ready Jul 10 00:35:32.084791 systemd-networkd[1038]: lxc97ff10337c65: Gained carrier Jul 10 00:35:32.279102 kubelet[1417]: E0710 00:35:32.278995 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:32.305961 env[1214]: time="2025-07-10T00:35:32.305891667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:32.305961 env[1214]: time="2025-07-10T00:35:32.305929150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:32.306215 env[1214]: time="2025-07-10T00:35:32.305938991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:32.306472 env[1214]: time="2025-07-10T00:35:32.306428394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9800ca303b7b86287f440b865ba75c83374093789c706fedd0b8c354c76e9311 pid=2778 runtime=io.containerd.runc.v2 Jul 10 00:35:32.317144 systemd[1]: Started cri-containerd-9800ca303b7b86287f440b865ba75c83374093789c706fedd0b8c354c76e9311.scope. 
Jul 10 00:35:32.362814 systemd-resolved[1161]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:35:32.383084 env[1214]: time="2025-07-10T00:35:32.383025904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d6d15651-ec3b-464d-8d05-734a1da040f7,Namespace:default,Attempt:0,} returns sandbox id \"9800ca303b7b86287f440b865ba75c83374093789c706fedd0b8c354c76e9311\"" Jul 10 00:35:32.384380 env[1214]: time="2025-07-10T00:35:32.384341099Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 10 00:35:32.607863 env[1214]: time="2025-07-10T00:35:32.607759475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:32.609583 env[1214]: time="2025-07-10T00:35:32.609545110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:32.611316 env[1214]: time="2025-07-10T00:35:32.611288862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:32.613687 env[1214]: time="2025-07-10T00:35:32.613648108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:32.614524 env[1214]: time="2025-07-10T00:35:32.614491301Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 10 00:35:32.617179 env[1214]: time="2025-07-10T00:35:32.617148533Z" level=info msg="CreateContainer within sandbox \"9800ca303b7b86287f440b865ba75c83374093789c706fedd0b8c354c76e9311\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 10 00:35:32.627835 env[1214]: time="2025-07-10T00:35:32.627780698Z" level=info msg="CreateContainer within sandbox \"9800ca303b7b86287f440b865ba75c83374093789c706fedd0b8c354c76e9311\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"701f8ebfbe0515ebb9e196cdb02f778ef92f7e9fa6f0dc024dbe98595632e16f\"" Jul 10 00:35:32.628442 env[1214]: time="2025-07-10T00:35:32.628416314Z" level=info msg="StartContainer for \"701f8ebfbe0515ebb9e196cdb02f778ef92f7e9fa6f0dc024dbe98595632e16f\"" Jul 10 00:35:32.642358 systemd[1]: Started cri-containerd-701f8ebfbe0515ebb9e196cdb02f778ef92f7e9fa6f0dc024dbe98595632e16f.scope. 
Jul 10 00:35:32.707023 env[1214]: time="2025-07-10T00:35:32.706648927Z" level=info msg="StartContainer for \"701f8ebfbe0515ebb9e196cdb02f778ef92f7e9fa6f0dc024dbe98595632e16f\" returns successfully" Jul 10 00:35:33.279493 kubelet[1417]: E0710 00:35:33.279446 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:33.562571 kubelet[1417]: I0710 00:35:33.562443 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.330421575 podStartE2EDuration="16.562428979s" podCreationTimestamp="2025-07-10 00:35:17 +0000 UTC" firstStartedPulling="2025-07-10 00:35:32.383818173 +0000 UTC m=+49.323117203" lastFinishedPulling="2025-07-10 00:35:32.615825537 +0000 UTC m=+49.555124607" observedRunningTime="2025-07-10 00:35:33.561602392 +0000 UTC m=+50.500901462" watchObservedRunningTime="2025-07-10 00:35:33.562428979 +0000 UTC m=+50.501728049" Jul 10 00:35:33.858200 systemd-networkd[1038]: lxc97ff10337c65: Gained IPv6LL Jul 10 00:35:34.279795 kubelet[1417]: E0710 00:35:34.279753 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:35.280169 kubelet[1417]: E0710 00:35:35.280083 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:36.280603 kubelet[1417]: E0710 00:35:36.280546 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:37.280737 kubelet[1417]: E0710 00:35:37.280661 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:38.281335 kubelet[1417]: E0710 00:35:38.281257 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:39.282167 kubelet[1417]: E0710 00:35:39.282013 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:40.196495 env[1214]: time="2025-07-10T00:35:40.196428983Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:35:40.201368 env[1214]: time="2025-07-10T00:35:40.201329377Z" level=info msg="StopContainer for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" with timeout 2 (s)" Jul 10 00:35:40.201596 env[1214]: time="2025-07-10T00:35:40.201570990Z" level=info msg="Stop container \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" with signal terminated" Jul 10 00:35:40.207926 systemd-networkd[1038]: lxc_health: Link DOWN Jul 10 00:35:40.207932 systemd-networkd[1038]: lxc_health: Lost carrier Jul 10 00:35:40.254380 systemd[1]: cri-containerd-0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c.scope: Deactivated successfully. Jul 10 00:35:40.254870 systemd[1]: cri-containerd-0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c.scope: Consumed 6.738s CPU time. Jul 10 00:35:40.270769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c-rootfs.mount: Deactivated successfully. 
Jul 10 00:35:40.282812 kubelet[1417]: E0710 00:35:40.282645 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:40.286671 env[1214]: time="2025-07-10T00:35:40.286420928Z" level=info msg="shim disconnected" id=0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c Jul 10 00:35:40.286671 env[1214]: time="2025-07-10T00:35:40.286644860Z" level=warning msg="cleaning up after shim disconnected" id=0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c namespace=k8s.io Jul 10 00:35:40.287061 env[1214]: time="2025-07-10T00:35:40.286863472Z" level=info msg="cleaning up dead shim" Jul 10 00:35:40.294976 env[1214]: time="2025-07-10T00:35:40.294933763Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2912 runtime=io.containerd.runc.v2\n" Jul 10 00:35:40.297780 env[1214]: time="2025-07-10T00:35:40.297723999Z" level=info msg="StopContainer for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" returns successfully" Jul 10 00:35:40.298379 env[1214]: time="2025-07-10T00:35:40.298348434Z" level=info msg="StopPodSandbox for \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\"" Jul 10 00:35:40.298527 env[1214]: time="2025-07-10T00:35:40.298505482Z" level=info msg="Container to stop \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:35:40.298595 env[1214]: time="2025-07-10T00:35:40.298579646Z" level=info msg="Container to stop \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:35:40.298655 env[1214]: time="2025-07-10T00:35:40.298639090Z" level=info msg="Container to stop \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:35:40.298709 env[1214]: time="2025-07-10T00:35:40.298695133Z" level=info msg="Container to stop \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:35:40.298765 env[1214]: time="2025-07-10T00:35:40.298751216Z" level=info msg="Container to stop \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:35:40.300427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c-shm.mount: Deactivated successfully. Jul 10 00:35:40.306193 systemd[1]: cri-containerd-e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c.scope: Deactivated successfully. Jul 10 00:35:40.327303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c-rootfs.mount: Deactivated successfully. 
Jul 10 00:35:40.332947 env[1214]: time="2025-07-10T00:35:40.332894722Z" level=info msg="shim disconnected" id=e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c Jul 10 00:35:40.332947 env[1214]: time="2025-07-10T00:35:40.332941685Z" level=warning msg="cleaning up after shim disconnected" id=e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c namespace=k8s.io Jul 10 00:35:40.332947 env[1214]: time="2025-07-10T00:35:40.332952205Z" level=info msg="cleaning up dead shim" Jul 10 00:35:40.340708 env[1214]: time="2025-07-10T00:35:40.340655396Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2943 runtime=io.containerd.runc.v2\n" Jul 10 00:35:40.341018 env[1214]: time="2025-07-10T00:35:40.340971453Z" level=info msg="TearDown network for sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" successfully" Jul 10 00:35:40.341018 env[1214]: time="2025-07-10T00:35:40.340997735Z" level=info msg="StopPodSandbox for \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" returns successfully" Jul 10 00:35:40.504869 kubelet[1417]: I0710 00:35:40.504440 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-xtables-lock\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.504869 kubelet[1417]: I0710 00:35:40.504476 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-kernel\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.504869 kubelet[1417]: I0710 00:35:40.504505 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-run\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.504869 kubelet[1417]: I0710 00:35:40.504531 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-hubble-tls\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.504869 kubelet[1417]: I0710 00:35:40.504547 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-hostproc\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.504869 kubelet[1417]: I0710 00:35:40.504570 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-cgroup\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505161 kubelet[1417]: I0710 00:35:40.504585 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cni-path\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") 
" Jul 10 00:35:40.505161 kubelet[1417]: I0710 00:35:40.504601 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-bpf-maps\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505161 kubelet[1417]: I0710 00:35:40.504614 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-net\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505161 kubelet[1417]: I0710 00:35:40.504631 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0afc8bde-982f-4750-a3f5-637da5b3d369-clustermesh-secrets\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505161 kubelet[1417]: I0710 00:35:40.504656 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64cfp\" (UniqueName: \"kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-kube-api-access-64cfp\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505161 kubelet[1417]: I0710 00:35:40.504674 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-etc-cni-netd\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505374 kubelet[1417]: I0710 00:35:40.504687 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-lib-modules\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505374 kubelet[1417]: I0710 00:35:40.504704 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-config-path\") pod \"0afc8bde-982f-4750-a3f5-637da5b3d369\" (UID: \"0afc8bde-982f-4750-a3f5-637da5b3d369\") " Jul 10 00:35:40.505374 kubelet[1417]: I0710 00:35:40.505113 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cni-path" (OuterVolumeSpecName: "cni-path") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.505374 kubelet[1417]: I0710 00:35:40.505158 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.505374 kubelet[1417]: I0710 00:35:40.505204 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.505502 kubelet[1417]: I0710 00:35:40.505220 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.505735 kubelet[1417]: I0710 00:35:40.505604 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.505735 kubelet[1417]: I0710 00:35:40.505683 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-hostproc" (OuterVolumeSpecName: "hostproc") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.505735 kubelet[1417]: I0710 00:35:40.505707 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.506023 kubelet[1417]: I0710 00:35:40.505981 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.506023 kubelet[1417]: I0710 00:35:40.506018 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.506112 kubelet[1417]: I0710 00:35:40.506050 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:40.508920 kubelet[1417]: I0710 00:35:40.508834 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:35:40.511985 systemd[1]: var-lib-kubelet-pods-0afc8bde\x2d982f\x2d4750\x2da3f5\x2d637da5b3d369-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:35:40.512760 kubelet[1417]: I0710 00:35:40.512192 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc8bde-982f-4750-a3f5-637da5b3d369-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:35:40.512760 kubelet[1417]: I0710 00:35:40.512491 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:35:40.513328 kubelet[1417]: I0710 00:35:40.513238 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-kube-api-access-64cfp" (OuterVolumeSpecName: "kube-api-access-64cfp") pod "0afc8bde-982f-4750-a3f5-637da5b3d369" (UID: "0afc8bde-982f-4750-a3f5-637da5b3d369"). InnerVolumeSpecName "kube-api-access-64cfp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:35:40.566935 kubelet[1417]: I0710 00:35:40.566892 1417 scope.go:117] "RemoveContainer" containerID="0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c" Jul 10 00:35:40.569395 env[1214]: time="2025-07-10T00:35:40.569349363Z" level=info msg="RemoveContainer for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\"" Jul 10 00:35:40.571977 systemd[1]: Removed slice kubepods-burstable-pod0afc8bde_982f_4750_a3f5_637da5b3d369.slice. Jul 10 00:35:40.572096 systemd[1]: kubepods-burstable-pod0afc8bde_982f_4750_a3f5_637da5b3d369.slice: Consumed 6.946s CPU time. 
Jul 10 00:35:40.572958 env[1214]: time="2025-07-10T00:35:40.572903962Z" level=info msg="RemoveContainer for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" returns successfully" Jul 10 00:35:40.573186 kubelet[1417]: I0710 00:35:40.573150 1417 scope.go:117] "RemoveContainer" containerID="cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa" Jul 10 00:35:40.574603 env[1214]: time="2025-07-10T00:35:40.574564895Z" level=info msg="RemoveContainer for \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\"" Jul 10 00:35:40.579073 env[1214]: time="2025-07-10T00:35:40.579014943Z" level=info msg="RemoveContainer for \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\" returns successfully" Jul 10 00:35:40.579228 kubelet[1417]: I0710 00:35:40.579204 1417 scope.go:117] "RemoveContainer" containerID="dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3" Jul 10 00:35:40.581214 env[1214]: time="2025-07-10T00:35:40.580904369Z" level=info msg="RemoveContainer for \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\"" Jul 10 00:35:40.586149 env[1214]: time="2025-07-10T00:35:40.586014254Z" level=info msg="RemoveContainer for \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\" returns successfully" Jul 10 00:35:40.586367 kubelet[1417]: I0710 00:35:40.586347 1417 scope.go:117] "RemoveContainer" containerID="ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45" Jul 10 00:35:40.587685 env[1214]: time="2025-07-10T00:35:40.587480616Z" level=info msg="RemoveContainer for \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\"" Jul 10 00:35:40.592228 env[1214]: time="2025-07-10T00:35:40.592134796Z" level=info msg="RemoveContainer for \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\" returns successfully" Jul 10 00:35:40.592370 kubelet[1417]: I0710 00:35:40.592324 1417 scope.go:117] "RemoveContainer" containerID="bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1" Jul 10 00:35:40.593329 env[1214]: time="2025-07-10T00:35:40.593295940Z" level=info msg="RemoveContainer for \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\"" Jul 10 00:35:40.595560 env[1214]: time="2025-07-10T00:35:40.595528265Z" level=info msg="RemoveContainer for \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\" returns successfully" Jul 10 00:35:40.595690 kubelet[1417]: I0710 00:35:40.595663 1417 scope.go:117] "RemoveContainer" containerID="0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c" Jul 10 00:35:40.595925 env[1214]: time="2025-07-10T00:35:40.595860284Z" level=error msg="ContainerStatus for \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\": not found" Jul 10 00:35:40.596148 kubelet[1417]: E0710 00:35:40.596114 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\": not found" containerID="0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c" Jul 10 00:35:40.596243 kubelet[1417]: I0710 00:35:40.596144 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c"} err="failed to 
get container status \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d2a630a7cfdc85841ade1cfab62f76ff6bb8339060020e9566e6d90113e360c\": not found" Jul 10 00:35:40.596243 kubelet[1417]: I0710 00:35:40.596221 1417 scope.go:117] "RemoveContainer" containerID="cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa" Jul 10 00:35:40.596418 env[1214]: time="2025-07-10T00:35:40.596349311Z" level=error msg="ContainerStatus for \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\": not found" Jul 10 00:35:40.596501 kubelet[1417]: E0710 00:35:40.596446 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\": not found" containerID="cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa" Jul 10 00:35:40.596501 kubelet[1417]: I0710 00:35:40.596463 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa"} err="failed to get container status \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf1b4ff3597e933d98bbb74fea3f12a30ece871f3e01dc5ecec5077cfa61a0aa\": not found" Jul 10 00:35:40.596501 kubelet[1417]: I0710 00:35:40.596475 1417 scope.go:117] "RemoveContainer" containerID="dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3" Jul 10 00:35:40.596772 env[1214]: time="2025-07-10T00:35:40.596736012Z" level=error msg="ContainerStatus for \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\": not found" Jul 10 00:35:40.596846 kubelet[1417]: E0710 00:35:40.596823 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\": not found" containerID="dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3" Jul 10 00:35:40.596846 kubelet[1417]: I0710 00:35:40.596837 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3"} err="failed to get container status \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbfbf29a573128b4efcbc966c8527984a3a963eaa39cbb08592c9c7fb9f7a0e3\": not found" Jul 10 00:35:40.596901 kubelet[1417]: I0710 00:35:40.596847 1417 scope.go:117] "RemoveContainer" containerID="ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45" Jul 10 00:35:40.597214 env[1214]: time="2025-07-10T00:35:40.597085072Z" level=error msg="ContainerStatus for \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\": not found" Jul 10 00:35:40.597276 kubelet[1417]: E0710 00:35:40.597198 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\": not found" containerID="ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45" Jul 10 00:35:40.597276 kubelet[1417]: I0710 00:35:40.597215 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45"} err="failed to get container status \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddd74305db10e0e918b669bda9d1c241ff21627dff4882a7fc5e89320a259c45\": not found" Jul 10 00:35:40.597276 kubelet[1417]: I0710 00:35:40.597228 1417 scope.go:117] "RemoveContainer" containerID="bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1" Jul 10 00:35:40.597640 env[1214]: time="2025-07-10T00:35:40.597533577Z" level=error msg="ContainerStatus for \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\": not found" Jul 10 00:35:40.597738 kubelet[1417]: E0710 00:35:40.597706 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\": not found" containerID="bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1" Jul 10 00:35:40.597785 kubelet[1417]: I0710 00:35:40.597735 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1"} err="failed to get container status \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\": rpc error: code = NotFound desc = an error occurred when try to find container \"bbfacd56c9dcd2209be99cb8ada24e86a3e0094bc4a5249ee2ed9d88ae52dec1\": not found" Jul 10 00:35:40.605381 kubelet[1417]: I0710 00:35:40.605319 1417 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-bpf-maps\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605381 kubelet[1417]: I0710 00:35:40.605353 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-net\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605381 kubelet[1417]: I0710 00:35:40.605363 1417 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0afc8bde-982f-4750-a3f5-637da5b3d369-clustermesh-secrets\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605381 kubelet[1417]: I0710 00:35:40.605372 1417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-64cfp\" (UniqueName: \"kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-kube-api-access-64cfp\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605381 kubelet[1417]: I0710 00:35:40.605380 1417 reconciler_common.go:299] "Volume detached for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-etc-cni-netd\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605381 kubelet[1417]: I0710 00:35:40.605387 1417 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-lib-modules\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605396 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-config-path\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605403 1417 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-xtables-lock\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605411 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-host-proc-sys-kernel\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605419 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-run\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605427 1417 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0afc8bde-982f-4750-a3f5-637da5b3d369-hubble-tls\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605434 1417 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-hostproc\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605441 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cilium-cgroup\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:40.605631 kubelet[1417]: I0710 00:35:40.605448 1417 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0afc8bde-982f-4750-a3f5-637da5b3d369-cni-path\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:41.169122 systemd[1]: var-lib-kubelet-pods-0afc8bde\x2d982f\x2d4750\x2da3f5\x2d637da5b3d369-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64cfp.mount: Deactivated successfully. Jul 10 00:35:41.169223 systemd[1]: var-lib-kubelet-pods-0afc8bde\x2d982f\x2d4750\x2da3f5\x2d637da5b3d369-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 10 00:35:41.283155 kubelet[1417]: E0710 00:35:41.283089 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:42.283280 kubelet[1417]: E0710 00:35:42.283222 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:42.453692 kubelet[1417]: I0710 00:35:42.453648 1417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afc8bde-982f-4750-a3f5-637da5b3d369" path="/var/lib/kubelet/pods/0afc8bde-982f-4750-a3f5-637da5b3d369/volumes" Jul 10 00:35:43.231038 kubelet[1417]: I0710 00:35:43.230203 1417 memory_manager.go:355] "RemoveStaleState removing state" podUID="0afc8bde-982f-4750-a3f5-637da5b3d369" containerName="cilium-agent" Jul 10 00:35:43.235806 systemd[1]: Created slice kubepods-besteffort-pod87ed9a9c_1f34_4a35_a500_d5c7a5ea0917.slice. Jul 10 00:35:43.263934 systemd[1]: Created slice kubepods-burstable-podf8915214_510f_4e35_afaa_2a69c7b9f03f.slice. Jul 10 00:35:43.283593 kubelet[1417]: E0710 00:35:43.283552 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:43.403755 kubelet[1417]: E0710 00:35:43.403694 1417 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zn27r lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-hjdmx" podUID="f8915214-510f-4e35-afaa-2a69c7b9f03f" Jul 10 00:35:43.423043 kubelet[1417]: I0710 00:35:43.422996 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn27r\" (UniqueName: \"kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-kube-api-access-zn27r\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423136 kubelet[1417]: I0710 00:35:43.423053 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87ed9a9c-1f34-4a35-a500-d5c7a5ea0917-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h829b\" (UID: \"87ed9a9c-1f34-4a35-a500-d5c7a5ea0917\") " pod="kube-system/cilium-operator-6c4d7847fc-h829b" Jul 10 00:35:43.423136 kubelet[1417]: I0710 00:35:43.423079 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-config-path\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423186 kubelet[1417]: I0710 00:35:43.423141 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-hostproc\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423217 kubelet[1417]: I0710 00:35:43.423160 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-cgroup\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423217 kubelet[1417]: I0710 00:35:43.423202 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-clustermesh-secrets\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423274 kubelet[1417]: I0710 00:35:43.423219 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzt9t\" (UniqueName: \"kubernetes.io/projected/87ed9a9c-1f34-4a35-a500-d5c7a5ea0917-kube-api-access-gzt9t\") pod \"cilium-operator-6c4d7847fc-h829b\" (UID: \"87ed9a9c-1f34-4a35-a500-d5c7a5ea0917\") " pod="kube-system/cilium-operator-6c4d7847fc-h829b" Jul 10 00:35:43.423306 kubelet[1417]: I0710 00:35:43.423276 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-run\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423306 kubelet[1417]: I0710 00:35:43.423296 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cni-path\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423352 kubelet[1417]: I0710 00:35:43.423313 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-xtables-lock\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423378 kubelet[1417]: I0710 00:35:43.423354 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-hubble-tls\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423404 kubelet[1417]: I0710 00:35:43.423376 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-net\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423404 kubelet[1417]: I0710 00:35:43.423393 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-kernel\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423448 kubelet[1417]: I0710 00:35:43.423408 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-bpf-maps\") pod \"cilium-hjdmx\" (UID: 
\"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423448 kubelet[1417]: I0710 00:35:43.423424 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-etc-cni-netd\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423448 kubelet[1417]: I0710 00:35:43.423438 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-lib-modules\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.423512 kubelet[1417]: I0710 00:35:43.423453 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-ipsec-secrets\") pod \"cilium-hjdmx\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " pod="kube-system/cilium-hjdmx" Jul 10 00:35:43.725926 kubelet[1417]: I0710 00:35:43.725864 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-cgroup\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.725926 kubelet[1417]: I0710 00:35:43.725906 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cni-path\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.725926 kubelet[1417]: I0710 00:35:43.725924 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-lib-modules\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.725926 kubelet[1417]: I0710 00:35:43.725939 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-bpf-maps\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726253 kubelet[1417]: I0710 00:35:43.725960 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn27r\" (UniqueName: \"kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-kube-api-access-zn27r\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726253 kubelet[1417]: I0710 00:35:43.725989 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-hostproc\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726253 kubelet[1417]: I0710 00:35:43.726006 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-xtables-lock\") pod 
\"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726253 kubelet[1417]: I0710 00:35:43.726020 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-run\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726253 kubelet[1417]: I0710 00:35:43.726050 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-net\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726253 kubelet[1417]: I0710 00:35:43.726086 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-kernel\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726397 kubelet[1417]: I0710 00:35:43.726109 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-etc-cni-netd\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726397 kubelet[1417]: I0710 00:35:43.726126 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-config-path\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726397 kubelet[1417]: I0710 00:35:43.726144 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-clustermesh-secrets\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726397 kubelet[1417]: I0710 00:35:43.726160 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-hubble-tls\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726397 kubelet[1417]: I0710 00:35:43.726176 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-ipsec-secrets\") pod \"f8915214-510f-4e35-afaa-2a69c7b9f03f\" (UID: \"f8915214-510f-4e35-afaa-2a69c7b9f03f\") " Jul 10 00:35:43.726647 kubelet[1417]: I0710 00:35:43.726598 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726700 kubelet[1417]: I0710 00:35:43.726636 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726700 kubelet[1417]: I0710 00:35:43.726658 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726700 kubelet[1417]: I0710 00:35:43.726685 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cni-path" (OuterVolumeSpecName: "cni-path") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726700 kubelet[1417]: I0710 00:35:43.726692 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726797 kubelet[1417]: I0710 00:35:43.726706 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726797 kubelet[1417]: I0710 00:35:43.726727 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726797 kubelet[1417]: I0710 00:35:43.726742 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.726797 kubelet[1417]: I0710 00:35:43.726750 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.728405 kubelet[1417]: I0710 00:35:43.728372 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:35:43.728823 kubelet[1417]: I0710 00:35:43.728786 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-hostproc" (OuterVolumeSpecName: "hostproc") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:35:43.733494 kubelet[1417]: I0710 00:35:43.732080 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:35:43.735004 kubelet[1417]: I0710 00:35:43.734587 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-kube-api-access-zn27r" (OuterVolumeSpecName: "kube-api-access-zn27r") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "kube-api-access-zn27r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:35:43.735004 kubelet[1417]: I0710 00:35:43.734588 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:35:43.735004 kubelet[1417]: I0710 00:35:43.734943 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f8915214-510f-4e35-afaa-2a69c7b9f03f" (UID: "f8915214-510f-4e35-afaa-2a69c7b9f03f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:35:43.827099 kubelet[1417]: I0710 00:35:43.827064 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-run\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827254 kubelet[1417]: I0710 00:35:43.827236 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-net\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827350 kubelet[1417]: I0710 00:35:43.827336 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-host-proc-sys-kernel\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827410 kubelet[1417]: I0710 00:35:43.827399 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-config-path\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827463 kubelet[1417]: I0710 00:35:43.827454 1417 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-clustermesh-secrets\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827524 kubelet[1417]: I0710 00:35:43.827514 1417 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-hubble-tls\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827579 kubelet[1417]: I0710 00:35:43.827569 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-ipsec-secrets\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827631 kubelet[1417]: I0710 00:35:43.827622 1417 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-etc-cni-netd\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827684 kubelet[1417]: I0710 00:35:43.827674 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cilium-cgroup\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827737 kubelet[1417]: I0710 00:35:43.827727 1417 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-cni-path\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827795 kubelet[1417]: I0710 00:35:43.827785 1417 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-bpf-maps\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827860 kubelet[1417]: I0710 00:35:43.827848 1417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zn27r\" (UniqueName: \"kubernetes.io/projected/f8915214-510f-4e35-afaa-2a69c7b9f03f-kube-api-access-zn27r\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827920 kubelet[1417]: I0710 00:35:43.827909 1417 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-hostproc\") on node 
\"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.827977 kubelet[1417]: I0710 00:35:43.827966 1417 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-xtables-lock\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.828057 kubelet[1417]: I0710 00:35:43.828019 1417 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8915214-510f-4e35-afaa-2a69c7b9f03f-lib-modules\") on node \"10.0.0.80\" DevicePath \"\"" Jul 10 00:35:43.838436 kubelet[1417]: E0710 00:35:43.838409 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.839303 env[1214]: time="2025-07-10T00:35:43.839234299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h829b,Uid:87ed9a9c-1f34-4a35-a500-d5c7a5ea0917,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:43.853615 env[1214]: time="2025-07-10T00:35:43.853530609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:43.853807 env[1214]: time="2025-07-10T00:35:43.853619172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:43.853807 env[1214]: time="2025-07-10T00:35:43.853656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:43.854731 env[1214]: time="2025-07-10T00:35:43.853941623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43b6e146fbda046ceecebc5609dd1b6a0deafbeb605d735f1508d64d6d5c1fea pid=2974 runtime=io.containerd.runc.v2 Jul 10 00:35:43.864212 systemd[1]: Started cri-containerd-43b6e146fbda046ceecebc5609dd1b6a0deafbeb605d735f1508d64d6d5c1fea.scope. 
Jul 10 00:35:43.923648 env[1214]: time="2025-07-10T00:35:43.923595792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h829b,Uid:87ed9a9c-1f34-4a35-a500-d5c7a5ea0917,Namespace:kube-system,Attempt:0,} returns sandbox id \"43b6e146fbda046ceecebc5609dd1b6a0deafbeb605d735f1508d64d6d5c1fea\"" Jul 10 00:35:43.924369 kubelet[1417]: E0710 00:35:43.924344 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.925482 env[1214]: time="2025-07-10T00:35:43.925446173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:35:44.250605 kubelet[1417]: E0710 00:35:44.250557 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:44.285265 kubelet[1417]: E0710 00:35:44.285230 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:44.292759 env[1214]: time="2025-07-10T00:35:44.292720020Z" level=info msg="StopPodSandbox for \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\"" Jul 10 00:35:44.292887 env[1214]: time="2025-07-10T00:35:44.292812503Z" level=info msg="TearDown network for sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" successfully" Jul 10 00:35:44.292887 env[1214]: time="2025-07-10T00:35:44.292847824Z" level=info msg="StopPodSandbox for \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" returns successfully" Jul 10 00:35:44.293674 env[1214]: time="2025-07-10T00:35:44.293641689Z" level=info msg="RemovePodSandbox for \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\"" Jul 10 00:35:44.293847 env[1214]: time="2025-07-10T00:35:44.293810135Z" level=info msg="Forcibly stopping sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\"" Jul 10 00:35:44.293960 env[1214]: time="2025-07-10T00:35:44.293939379Z" level=info msg="TearDown network for sandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" successfully" Jul 10 00:35:44.298486 env[1214]: time="2025-07-10T00:35:44.298449963Z" level=info msg="RemovePodSandbox \"e796c6539834d69b8a33ce47f23bb6243838a6998f5fcc9c18318e22a7cd7d0c\" returns successfully" Jul 10 00:35:44.400844 kubelet[1417]: E0710 00:35:44.400804 1417 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:35:44.456902 systemd[1]: Removed slice kubepods-burstable-podf8915214_510f_4e35_afaa_2a69c7b9f03f.slice. Jul 10 00:35:44.530099 systemd[1]: var-lib-kubelet-pods-f8915214\x2d510f\x2d4e35\x2dafaa\x2d2a69c7b9f03f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzn27r.mount: Deactivated successfully. Jul 10 00:35:44.530188 systemd[1]: var-lib-kubelet-pods-f8915214\x2d510f\x2d4e35\x2dafaa\x2d2a69c7b9f03f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:35:44.530239 systemd[1]: var-lib-kubelet-pods-f8915214\x2d510f\x2d4e35\x2dafaa\x2d2a69c7b9f03f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 10 00:35:44.530294 systemd[1]: var-lib-kubelet-pods-f8915214\x2d510f\x2d4e35\x2dafaa\x2d2a69c7b9f03f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:35:44.615437 systemd[1]: Created slice kubepods-burstable-podcf0a59c6_e15d_42a6_8e0f_ac25040d2607.slice. Jul 10 00:35:44.732914 kubelet[1417]: I0710 00:35:44.732838 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-host-proc-sys-kernel\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.732914 kubelet[1417]: I0710 00:35:44.732887 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-cilium-run\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.732914 kubelet[1417]: I0710 00:35:44.732907 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-clustermesh-secrets\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.732914 kubelet[1417]: I0710 00:35:44.732923 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-cilium-config-path\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733182 kubelet[1417]: I0710 00:35:44.732943 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-xtables-lock\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733182 kubelet[1417]: I0710 00:35:44.732960 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-cilium-ipsec-secrets\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733182 kubelet[1417]: I0710 00:35:44.732977 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-cilium-cgroup\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733182 kubelet[1417]: I0710 00:35:44.732992 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-etc-cni-netd\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733182 kubelet[1417]: I0710 00:35:44.733007 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-host-proc-sys-net\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733182 kubelet[1417]: I0710 00:35:44.733023 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-hostproc\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733333 kubelet[1417]: I0710 00:35:44.733068 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-cni-path\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733333 kubelet[1417]: I0710 00:35:44.733085 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-bpf-maps\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733333 kubelet[1417]: I0710 00:35:44.733101 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-lib-modules\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733333 kubelet[1417]: I0710 00:35:44.733121 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-hubble-tls\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.733333 kubelet[1417]: I0710 00:35:44.733135 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjzt\" (UniqueName: \"kubernetes.io/projected/cf0a59c6-e15d-42a6-8e0f-ac25040d2607-kube-api-access-shjzt\") pod \"cilium-bxvk7\" (UID: \"cf0a59c6-e15d-42a6-8e0f-ac25040d2607\") " pod="kube-system/cilium-bxvk7" Jul 10 00:35:44.927453 kubelet[1417]: E0710 00:35:44.927417 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:44.927949 env[1214]: time="2025-07-10T00:35:44.927901680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxvk7,Uid:cf0a59c6-e15d-42a6-8e0f-ac25040d2607,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:44.950733 env[1214]: time="2025-07-10T00:35:44.950672128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:44.950868 env[1214]: time="2025-07-10T00:35:44.950742490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:44.950868 env[1214]: time="2025-07-10T00:35:44.950770451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:44.951013 env[1214]: time="2025-07-10T00:35:44.950980218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e pid=3023 runtime=io.containerd.runc.v2 Jul 10 00:35:44.963653 systemd[1]: Started cri-containerd-2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e.scope. Jul 10 00:35:45.002651 env[1214]: time="2025-07-10T00:35:45.002604866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxvk7,Uid:cf0a59c6-e15d-42a6-8e0f-ac25040d2607,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\"" Jul 10 00:35:45.003841 kubelet[1417]: E0710 00:35:45.003365 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:45.005276 env[1214]: time="2025-07-10T00:35:45.005241508Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:35:45.014530 env[1214]: time="2025-07-10T00:35:45.014493036Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac\"" Jul 10 00:35:45.015081 env[1214]: time="2025-07-10T00:35:45.015058213Z" level=info msg="StartContainer for \"f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac\"" Jul 10 00:35:45.029166 systemd[1]: Started cri-containerd-f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac.scope. Jul 10 00:35:45.097784 env[1214]: time="2025-07-10T00:35:45.097729983Z" level=info msg="StartContainer for \"f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac\" returns successfully" Jul 10 00:35:45.128716 systemd[1]: cri-containerd-f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac.scope: Deactivated successfully. 
Jul 10 00:35:45.154537 env[1214]: time="2025-07-10T00:35:45.154491067Z" level=info msg="shim disconnected" id=f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac Jul 10 00:35:45.154786 env[1214]: time="2025-07-10T00:35:45.154767356Z" level=warning msg="cleaning up after shim disconnected" id=f000e89ba329b9c508a977651cb289fb9fc2e0084b7b97012aff2507c9ae11ac namespace=k8s.io Jul 10 00:35:45.154848 env[1214]: time="2025-07-10T00:35:45.154835958Z" level=info msg="cleaning up dead shim" Jul 10 00:35:45.161353 env[1214]: time="2025-07-10T00:35:45.161313039Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3108 runtime=io.containerd.runc.v2\n" Jul 10 00:35:45.286736 kubelet[1417]: E0710 00:35:45.286232 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:45.578699 kubelet[1417]: E0710 00:35:45.578481 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:45.580626 env[1214]: time="2025-07-10T00:35:45.580580950Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:35:45.594329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382508667.mount: Deactivated successfully. Jul 10 00:35:45.596205 env[1214]: time="2025-07-10T00:35:45.596157515Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285\"" Jul 10 00:35:45.596797 env[1214]: time="2025-07-10T00:35:45.596766573Z" level=info msg="StartContainer for \"5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285\"" Jul 10 00:35:45.612530 systemd[1]: Started cri-containerd-5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285.scope. Jul 10 00:35:45.644879 env[1214]: time="2025-07-10T00:35:45.644829907Z" level=info msg="StartContainer for \"5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285\" returns successfully" Jul 10 00:35:45.650918 systemd[1]: cri-containerd-5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285.scope: Deactivated successfully. 
Jul 10 00:35:45.654270 kubelet[1417]: I0710 00:35:45.653486 1417 setters.go:602] "Node became not ready" node="10.0.0.80" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:35:45Z","lastTransitionTime":"2025-07-10T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:35:45.701403 env[1214]: time="2025-07-10T00:35:45.701348944Z" level=info msg="shim disconnected" id=5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285 Jul 10 00:35:45.701403 env[1214]: time="2025-07-10T00:35:45.701390505Z" level=warning msg="cleaning up after shim disconnected" id=5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285 namespace=k8s.io Jul 10 00:35:45.701403 env[1214]: time="2025-07-10T00:35:45.701399586Z" level=info msg="cleaning up dead shim" Jul 10 00:35:45.707949 env[1214]: time="2025-07-10T00:35:45.707897068Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3172 runtime=io.containerd.runc.v2\n" Jul 10 00:35:45.885441 env[1214]: time="2025-07-10T00:35:45.884987212Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:45.886812 env[1214]: time="2025-07-10T00:35:45.886771147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:45.889141 env[1214]: time="2025-07-10T00:35:45.889096579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:45.889376 env[1214]: time="2025-07-10T00:35:45.889331907Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 00:35:45.891892 env[1214]: time="2025-07-10T00:35:45.891844545Z" level=info msg="CreateContainer within sandbox \"43b6e146fbda046ceecebc5609dd1b6a0deafbeb605d735f1508d64d6d5c1fea\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:35:45.901337 env[1214]: time="2025-07-10T00:35:45.901280758Z" level=info msg="CreateContainer within sandbox \"43b6e146fbda046ceecebc5609dd1b6a0deafbeb605d735f1508d64d6d5c1fea\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"20953ef0d1d503ee8fb2912a3ac7988c37ff4018248004dfb96c61df0d66f77b\"" Jul 10 00:35:45.901842 env[1214]: time="2025-07-10T00:35:45.901815575Z" level=info msg="StartContainer for \"20953ef0d1d503ee8fb2912a3ac7988c37ff4018248004dfb96c61df0d66f77b\"" Jul 10 00:35:45.921392 systemd[1]: Started cri-containerd-20953ef0d1d503ee8fb2912a3ac7988c37ff4018248004dfb96c61df0d66f77b.scope. 
Jul 10 00:35:45.971721 env[1214]: time="2025-07-10T00:35:45.971670826Z" level=info msg="StartContainer for \"20953ef0d1d503ee8fb2912a3ac7988c37ff4018248004dfb96c61df0d66f77b\" returns successfully" Jul 10 00:35:46.287153 kubelet[1417]: E0710 00:35:46.287086 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:46.453751 kubelet[1417]: I0710 00:35:46.453548 1417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8915214-510f-4e35-afaa-2a69c7b9f03f" path="/var/lib/kubelet/pods/f8915214-510f-4e35-afaa-2a69c7b9f03f/volumes" Jul 10 00:35:46.529744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bdec0210d2abe9346279d957a9410c0e21cb153370f080bd78ed2ce19929285-rootfs.mount: Deactivated successfully. Jul 10 00:35:46.581594 kubelet[1417]: E0710 00:35:46.581296 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:46.583497 kubelet[1417]: E0710 00:35:46.583467 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:46.585534 env[1214]: time="2025-07-10T00:35:46.585494767Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:35:46.591207 kubelet[1417]: I0710 00:35:46.591143 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h829b" podStartSLOduration=1.625712916 podStartE2EDuration="3.591127378s" podCreationTimestamp="2025-07-10 00:35:43 +0000 UTC" firstStartedPulling="2025-07-10 00:35:43.925172964 +0000 UTC m=+60.864472034" lastFinishedPulling="2025-07-10 00:35:45.890587466 +0000 UTC m=+62.829886496" observedRunningTime="2025-07-10 00:35:46.591120937 +0000 UTC m=+63.530420007" watchObservedRunningTime="2025-07-10 00:35:46.591127378 +0000 UTC m=+63.530426448" Jul 10 00:35:46.601711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605968352.mount: Deactivated successfully. Jul 10 00:35:46.605568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177530609.mount: Deactivated successfully. Jul 10 00:35:46.608162 env[1214]: time="2025-07-10T00:35:46.608081810Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf\"" Jul 10 00:35:46.608760 env[1214]: time="2025-07-10T00:35:46.608698669Z" level=info msg="StartContainer for \"59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf\"" Jul 10 00:35:46.626110 systemd[1]: Started cri-containerd-59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf.scope. Jul 10 00:35:46.672893 systemd[1]: cri-containerd-59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf.scope: Deactivated successfully. 
Jul 10 00:35:46.673592 env[1214]: time="2025-07-10T00:35:46.673270221Z" level=info msg="StartContainer for \"59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf\" returns successfully" Jul 10 00:35:46.692050 env[1214]: time="2025-07-10T00:35:46.691980587Z" level=info msg="shim disconnected" id=59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf Jul 10 00:35:46.692230 env[1214]: time="2025-07-10T00:35:46.692167472Z" level=warning msg="cleaning up after shim disconnected" id=59fc822936434203af2336dabb11d748964e725aa6b7eb347f55d3cf536a16cf namespace=k8s.io Jul 10 00:35:46.692230 env[1214]: time="2025-07-10T00:35:46.692183393Z" level=info msg="cleaning up dead shim" Jul 10 00:35:46.698603 env[1214]: time="2025-07-10T00:35:46.698556585Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3271 runtime=io.containerd.runc.v2\n" Jul 10 00:35:47.287720 kubelet[1417]: E0710 00:35:47.287677 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:47.587122 kubelet[1417]: E0710 00:35:47.586826 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:47.587122 kubelet[1417]: E0710 00:35:47.586955 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:47.589511 env[1214]: time="2025-07-10T00:35:47.589286802Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:35:47.605583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623947555.mount: Deactivated successfully. Jul 10 00:35:47.607707 env[1214]: time="2025-07-10T00:35:47.607664823Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218\"" Jul 10 00:35:47.608266 env[1214]: time="2025-07-10T00:35:47.608192678Z" level=info msg="StartContainer for \"7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218\"" Jul 10 00:35:47.628677 systemd[1]: Started cri-containerd-7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218.scope. Jul 10 00:35:47.674671 systemd[1]: cri-containerd-7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218.scope: Deactivated successfully. 
Jul 10 00:35:47.677716 env[1214]: time="2025-07-10T00:35:47.676800256Z" level=info msg="StartContainer for \"7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218\" returns successfully" Jul 10 00:35:47.695881 env[1214]: time="2025-07-10T00:35:47.695630970Z" level=info msg="shim disconnected" id=7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218 Jul 10 00:35:47.695881 env[1214]: time="2025-07-10T00:35:47.695856336Z" level=warning msg="cleaning up after shim disconnected" id=7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218 namespace=k8s.io Jul 10 00:35:47.696447 env[1214]: time="2025-07-10T00:35:47.696249268Z" level=info msg="cleaning up dead shim" Jul 10 00:35:47.703674 env[1214]: time="2025-07-10T00:35:47.703210792Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3326 runtime=io.containerd.runc.v2\n" Jul 10 00:35:48.287982 kubelet[1417]: E0710 00:35:48.287934 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:48.529317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7681ee8a51b4153430a90f363cb2dc2d5f6f47e2afdd2a7383243acabe0c7218-rootfs.mount: Deactivated successfully. Jul 10 00:35:48.591259 kubelet[1417]: E0710 00:35:48.591020 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:48.593556 env[1214]: time="2025-07-10T00:35:48.593510981Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:35:48.609685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140504465.mount: Deactivated successfully. Jul 10 00:35:48.612634 env[1214]: time="2025-07-10T00:35:48.612591087Z" level=info msg="CreateContainer within sandbox \"2f5d197d6a521d74f4bd8898bf762efd691d9a0d84db4cb75bf73c8cc22b930e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403\"" Jul 10 00:35:48.613403 env[1214]: time="2025-07-10T00:35:48.613365189Z" level=info msg="StartContainer for \"11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403\"" Jul 10 00:35:48.633006 systemd[1]: Started cri-containerd-11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403.scope. 
Jul 10 00:35:48.707317 env[1214]: time="2025-07-10T00:35:48.707250195Z" level=info msg="StartContainer for \"11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403\" returns successfully" Jul 10 00:35:48.962063 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 10 00:35:49.288641 kubelet[1417]: E0710 00:35:49.288524 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:49.595195 kubelet[1417]: E0710 00:35:49.595092 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:50.289384 kubelet[1417]: E0710 00:35:50.289346 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:50.928322 kubelet[1417]: E0710 00:35:50.928290 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:51.290301 kubelet[1417]: E0710 00:35:51.290159 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:51.778016 systemd-networkd[1038]: lxc_health: Link UP Jul 10 00:35:51.789060 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:35:51.790598 systemd-networkd[1038]: lxc_health: Gained carrier Jul 10 00:35:51.836666 systemd[1]: run-containerd-runc-k8s.io-11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403-runc.aj14m9.mount: Deactivated successfully. Jul 10 00:35:52.290832 kubelet[1417]: E0710 00:35:52.290794 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:52.929785 kubelet[1417]: E0710 00:35:52.929743 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:52.948312 kubelet[1417]: I0710 00:35:52.948252 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxvk7" podStartSLOduration=8.948234283 podStartE2EDuration="8.948234283s" podCreationTimestamp="2025-07-10 00:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:49.622297613 +0000 UTC m=+66.561596763" watchObservedRunningTime="2025-07-10 00:35:52.948234283 +0000 UTC m=+69.887533353" Jul 10 00:35:53.188881 systemd-networkd[1038]: lxc_health: Gained IPv6LL Jul 10 00:35:53.291497 kubelet[1417]: E0710 00:35:53.291441 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:53.601845 kubelet[1417]: E0710 00:35:53.601732 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:53.983717 systemd[1]: run-containerd-runc-k8s.io-11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403-runc.kJg72Z.mount: Deactivated successfully. 
Jul 10 00:35:54.292118 kubelet[1417]: E0710 00:35:54.291983 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:54.603952 kubelet[1417]: E0710 00:35:54.603798 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:55.292727 kubelet[1417]: E0710 00:35:55.292679 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:56.105644 systemd[1]: run-containerd-runc-k8s.io-11c343ed31d40904be1ce0761c59402b1b38efd15a446a651152b8803d3d1403-runc.cst858.mount: Deactivated successfully. Jul 10 00:35:56.293123 kubelet[1417]: E0710 00:35:56.293076 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:57.293225 kubelet[1417]: E0710 00:35:57.293185 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:58.263683 kubelet[1417]: E0710 00:35:58.263625 1417 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46236->127.0.0.1:40525: write tcp 127.0.0.1:46236->127.0.0.1:40525: write: broken pipe Jul 10 00:35:58.293654 kubelet[1417]: E0710 00:35:58.293617 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:35:59.294039 kubelet[1417]: E0710 00:35:59.293982 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"