Jul 11 00:31:20.724568 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 11 00:31:20.724587 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu Jul 10 23:22:35 -00 2025 Jul 11 00:31:20.724595 kernel: efi: EFI v2.70 by EDK II Jul 11 00:31:20.724601 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 11 00:31:20.724606 kernel: random: crng init done Jul 11 00:31:20.724611 kernel: ACPI: Early table checksum verification disabled Jul 11 00:31:20.724618 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 11 00:31:20.724624 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 11 00:31:20.724630 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724635 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724640 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724646 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724651 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724656 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724664 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724670 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724676 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:31:20.724682 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 11 00:31:20.724687 kernel: NUMA: Failed to initialise from firmware Jul 11 00:31:20.724693 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:31:20.724699 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] Jul 11 00:31:20.724705 kernel: Zone ranges: Jul 11 00:31:20.724711 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:31:20.724718 kernel: DMA32 empty Jul 11 00:31:20.724723 kernel: Normal empty Jul 11 00:31:20.724746 kernel: Movable zone start for each node Jul 11 00:31:20.724752 kernel: Early memory node ranges Jul 11 00:31:20.724758 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 11 00:31:20.724764 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 11 00:31:20.724769 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 11 00:31:20.724775 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 11 00:31:20.724780 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 11 00:31:20.724786 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 11 00:31:20.724792 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 11 00:31:20.724797 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:31:20.724805 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 11 00:31:20.724810 kernel: psci: probing for conduit method from ACPI. Jul 11 00:31:20.724816 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 11 00:31:20.724821 kernel: psci: Using standard PSCI v0.2 function IDs Jul 11 00:31:20.724827 kernel: psci: Trusted OS migration not required Jul 11 00:31:20.724835 kernel: psci: SMC Calling Convention v1.1 Jul 11 00:31:20.724842 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 11 00:31:20.724849 kernel: ACPI: SRAT not present Jul 11 00:31:20.724855 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 11 00:31:20.724861 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 11 00:31:20.724868 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 11 00:31:20.724873 kernel: Detected PIPT I-cache on CPU0 Jul 11 00:31:20.724880 kernel: CPU features: detected: GIC system register CPU interface Jul 11 00:31:20.724886 kernel: CPU features: detected: Hardware dirty bit management Jul 11 00:31:20.724891 kernel: CPU features: detected: Spectre-v4 Jul 11 00:31:20.724898 kernel: CPU features: detected: Spectre-BHB Jul 11 00:31:20.724905 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 11 00:31:20.724911 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 11 00:31:20.724917 kernel: CPU features: detected: ARM erratum 1418040 Jul 11 00:31:20.724923 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 11 00:31:20.724929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 11 00:31:20.724935 kernel: Policy zone: DMA Jul 11 00:31:20.724942 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c Jul 11 00:31:20.724948 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:31:20.724954 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 00:31:20.724960 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:31:20.724966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:31:20.724974 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved) Jul 11 00:31:20.724980 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 00:31:20.724986 kernel: trace event string verifier disabled Jul 11 00:31:20.724992 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:31:20.724999 kernel: rcu: RCU event tracing is enabled. Jul 11 00:31:20.725005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 00:31:20.725011 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:31:20.725017 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:31:20.725023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 11 00:31:20.725029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 00:31:20.725035 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 11 00:31:20.725043 kernel: GICv3: 256 SPIs implemented Jul 11 00:31:20.725049 kernel: GICv3: 0 Extended SPIs implemented Jul 11 00:31:20.725055 kernel: GICv3: Distributor has no Range Selector support Jul 11 00:31:20.725061 kernel: Root IRQ handler: gic_handle_irq Jul 11 00:31:20.725067 kernel: GICv3: 16 PPIs implemented Jul 11 00:31:20.725073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 11 00:31:20.725079 kernel: ACPI: SRAT not present Jul 11 00:31:20.725085 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 11 00:31:20.725091 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 11 00:31:20.725097 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 11 00:31:20.725103 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 11 00:31:20.725110 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 11 00:31:20.725117 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:31:20.725123 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 11 00:31:20.725129 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 11 00:31:20.725135 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 11 00:31:20.725141 kernel: arm-pv: using stolen time PV Jul 11 00:31:20.725147 kernel: Console: colour dummy device 80x25 Jul 11 00:31:20.725154 kernel: ACPI: Core revision 20210730 Jul 11 00:31:20.725160 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 11 00:31:20.725183 kernel: pid_max: default: 32768 minimum: 301 Jul 11 00:31:20.725189 kernel: LSM: Security Framework initializing Jul 11 00:31:20.725202 kernel: SELinux: Initializing. Jul 11 00:31:20.725209 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:31:20.725215 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:31:20.725221 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:31:20.725227 kernel: Platform MSI: ITS@0x8080000 domain created Jul 11 00:31:20.725233 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 11 00:31:20.725239 kernel: Remapping and enabling EFI services. Jul 11 00:31:20.725245 kernel: smp: Bringing up secondary CPUs ... 
Jul 11 00:31:20.725251 kernel: Detected PIPT I-cache on CPU1 Jul 11 00:31:20.725259 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 11 00:31:20.725266 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 11 00:31:20.725272 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:31:20.725278 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 11 00:31:20.725284 kernel: Detected PIPT I-cache on CPU2 Jul 11 00:31:20.725290 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 11 00:31:20.725296 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 11 00:31:20.725303 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:31:20.725309 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 11 00:31:20.725315 kernel: Detected PIPT I-cache on CPU3 Jul 11 00:31:20.725322 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 11 00:31:20.725328 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 11 00:31:20.725334 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:31:20.725340 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 11 00:31:20.725351 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 00:31:20.725358 kernel: SMP: Total of 4 processors activated. Jul 11 00:31:20.725365 kernel: CPU features: detected: 32-bit EL0 Support Jul 11 00:31:20.725371 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 11 00:31:20.725378 kernel: CPU features: detected: Common not Private translations Jul 11 00:31:20.725384 kernel: CPU features: detected: CRC32 instructions Jul 11 00:31:20.725391 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 11 00:31:20.725397 kernel: CPU features: detected: LSE atomic instructions Jul 11 00:31:20.725405 kernel: CPU features: detected: Privileged Access Never Jul 11 00:31:20.725411 kernel: CPU features: detected: RAS Extension Support Jul 11 00:31:20.725418 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 11 00:31:20.725424 kernel: CPU: All CPU(s) started at EL1 Jul 11 00:31:20.725431 kernel: alternatives: patching kernel code Jul 11 00:31:20.725442 kernel: devtmpfs: initialized Jul 11 00:31:20.725448 kernel: KASLR enabled Jul 11 00:31:20.725455 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:31:20.725461 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 00:31:20.725468 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:31:20.725474 kernel: SMBIOS 3.0.0 present. 
Jul 11 00:31:20.725480 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 11 00:31:20.725487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:31:20.725493 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 11 00:31:20.725501 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 11 00:31:20.725508 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 11 00:31:20.725515 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:31:20.725521 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Jul 11 00:31:20.725528 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:31:20.725534 kernel: cpuidle: using governor menu Jul 11 00:31:20.725541 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 11 00:31:20.725547 kernel: ASID allocator initialised with 32768 entries Jul 11 00:31:20.725554 kernel: ACPI: bus type PCI registered Jul 11 00:31:20.725561 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:31:20.725567 kernel: Serial: AMBA PL011 UART driver Jul 11 00:31:20.725574 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:31:20.725580 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 11 00:31:20.725587 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:31:20.725593 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 11 00:31:20.725600 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:31:20.725606 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 11 00:31:20.725613 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:31:20.725620 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:31:20.725627 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:31:20.725633 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 11 00:31:20.725639 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 11 00:31:20.725646 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 11 00:31:20.725653 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:31:20.725659 kernel: ACPI: Interpreter enabled Jul 11 00:31:20.725666 kernel: ACPI: Using GIC for interrupt routing Jul 11 00:31:20.725672 kernel: ACPI: MCFG table detected, 1 entries Jul 11 00:31:20.725680 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 11 00:31:20.725686 kernel: printk: console [ttyAMA0] enabled Jul 11 00:31:20.725693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 00:31:20.725830 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:31:20.725895 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 11 00:31:20.725954 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 11 00:31:20.726010 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 11 00:31:20.726068 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 11 00:31:20.726077 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 11 00:31:20.726083 kernel: PCI host bridge to bus 0000:00 Jul 11 00:31:20.726148 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 11 00:31:20.726211 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 11 
00:31:20.726266 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 11 00:31:20.726318 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 00:31:20.726390 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 11 00:31:20.726471 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 11 00:31:20.726536 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 11 00:31:20.726598 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 11 00:31:20.726656 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 11 00:31:20.726714 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 11 00:31:20.726817 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 11 00:31:20.726882 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 11 00:31:20.726935 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 11 00:31:20.727002 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 11 00:31:20.727058 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 11 00:31:20.727067 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 11 00:31:20.727074 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 11 00:31:20.727081 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 11 00:31:20.727090 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 11 00:31:20.727096 kernel: iommu: Default domain type: Translated Jul 11 00:31:20.727103 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 11 00:31:20.727109 kernel: vgaarb: loaded Jul 11 00:31:20.727116 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 11 00:31:20.727123 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 11 00:31:20.727129 kernel: PTP clock support registered Jul 11 00:31:20.727136 kernel: Registered efivars operations Jul 11 00:31:20.727142 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 11 00:31:20.727148 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:31:20.727157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:31:20.727163 kernel: pnp: PnP ACPI init Jul 11 00:31:20.727272 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 11 00:31:20.727282 kernel: pnp: PnP ACPI: found 1 devices Jul 11 00:31:20.727293 kernel: NET: Registered PF_INET protocol family Jul 11 00:31:20.727300 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 00:31:20.727307 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 00:31:20.727313 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:31:20.727322 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:31:20.727329 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 11 00:31:20.727335 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 00:31:20.727342 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:31:20.727348 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:31:20.727355 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:31:20.727361 kernel: PCI: CLS 0 bytes, default 64 Jul 11 00:31:20.727368 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 11 00:31:20.727375 kernel: kvm [1]: HYP mode not available Jul 11 00:31:20.727383 kernel: Initialise system trusted keyrings Jul 11 00:31:20.727389 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 00:31:20.727396 kernel: Key type asymmetric registered Jul 11 00:31:20.727402 kernel: Asymmetric key parser 'x509' registered Jul 11 00:31:20.727409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 11 00:31:20.727415 kernel: io scheduler mq-deadline registered Jul 11 00:31:20.727421 kernel: io scheduler kyber registered Jul 11 00:31:20.727428 kernel: io scheduler bfq registered Jul 11 00:31:20.727439 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 11 00:31:20.727447 kernel: ACPI: button: Power Button [PWRB] Jul 11 00:31:20.727454 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 11 00:31:20.727521 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 11 00:31:20.727533 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:31:20.727540 kernel: thunder_xcv, ver 1.0 Jul 11 00:31:20.727546 kernel: thunder_bgx, ver 1.0 Jul 11 00:31:20.727553 kernel: nicpf, ver 1.0 Jul 11 00:31:20.727559 kernel: nicvf, ver 1.0 Jul 11 00:31:20.727626 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 11 00:31:20.727683 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:31:20 UTC (1752193880) Jul 11 00:31:20.727692 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 11 00:31:20.727699 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:31:20.727705 kernel: Segment Routing with IPv6 Jul 11 00:31:20.727712 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:31:20.727718 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:31:20.727724 kernel: Key type 
dns_resolver registered Jul 11 00:31:20.727754 kernel: registered taskstats version 1 Jul 11 00:31:20.727762 kernel: Loading compiled-in X.509 certificates Jul 11 00:31:20.727769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: e29f2f0310c2b60e0457f826e7476605fb3b6ab2' Jul 11 00:31:20.727776 kernel: Key type .fscrypt registered Jul 11 00:31:20.727782 kernel: Key type fscrypt-provisioning registered Jul 11 00:31:20.727788 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 11 00:31:20.727795 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:31:20.727801 kernel: ima: No architecture policies found Jul 11 00:31:20.727808 kernel: clk: Disabling unused clocks Jul 11 00:31:20.727814 kernel: Freeing unused kernel memory: 36416K Jul 11 00:31:20.727822 kernel: Run /init as init process Jul 11 00:31:20.727828 kernel: with arguments: Jul 11 00:31:20.727835 kernel: /init Jul 11 00:31:20.727841 kernel: with environment: Jul 11 00:31:20.727847 kernel: HOME=/ Jul 11 00:31:20.727854 kernel: TERM=linux Jul 11 00:31:20.727860 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:31:20.727869 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 11 00:31:20.727878 systemd[1]: Detected virtualization kvm. Jul 11 00:31:20.727886 systemd[1]: Detected architecture arm64. Jul 11 00:31:20.727893 systemd[1]: Running in initrd. Jul 11 00:31:20.727899 systemd[1]: No hostname configured, using default hostname. Jul 11 00:31:20.727906 systemd[1]: Hostname set to . Jul 11 00:31:20.727913 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:31:20.727920 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:31:20.727927 systemd[1]: Started systemd-ask-password-console.path. Jul 11 00:31:20.727935 systemd[1]: Reached target cryptsetup.target. Jul 11 00:31:20.727941 systemd[1]: Reached target paths.target. Jul 11 00:31:20.727948 systemd[1]: Reached target slices.target. Jul 11 00:31:20.727955 systemd[1]: Reached target swap.target. Jul 11 00:31:20.727962 systemd[1]: Reached target timers.target. Jul 11 00:31:20.727969 systemd[1]: Listening on iscsid.socket. Jul 11 00:31:20.727976 systemd[1]: Listening on iscsiuio.socket. Jul 11 00:31:20.727984 systemd[1]: Listening on systemd-journald-audit.socket. Jul 11 00:31:20.727991 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 11 00:31:20.727998 systemd[1]: Listening on systemd-journald.socket. Jul 11 00:31:20.728005 systemd[1]: Listening on systemd-networkd.socket. Jul 11 00:31:20.728012 systemd[1]: Listening on systemd-udevd-control.socket. Jul 11 00:31:20.728019 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 11 00:31:20.728026 systemd[1]: Reached target sockets.target. Jul 11 00:31:20.728033 systemd[1]: Starting kmod-static-nodes.service... Jul 11 00:31:20.728040 systemd[1]: Finished network-cleanup.service. Jul 11 00:31:20.728048 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:31:20.728055 systemd[1]: Starting systemd-journald.service... Jul 11 00:31:20.728062 systemd[1]: Starting systemd-modules-load.service... Jul 11 00:31:20.728069 systemd[1]: Starting systemd-resolved.service... Jul 11 00:31:20.728076 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 11 00:31:20.728083 systemd[1]: Finished kmod-static-nodes.service. Jul 11 00:31:20.728090 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:31:20.728097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 11 00:31:20.728104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 11 00:31:20.728112 kernel: audit: type=1130 audit(1752193880.725:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.728119 systemd[1]: Finished systemd-vconsole-setup.service. Jul 11 00:31:20.728129 systemd-journald[290]: Journal started Jul 11 00:31:20.728169 systemd-journald[290]: Runtime Journal (/run/log/journal/8c7360041e0e454db52779c75e761edc) is 6.0M, max 48.7M, 42.6M free. Jul 11 00:31:20.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.719790 systemd-modules-load[291]: Inserted module 'overlay' Jul 11 00:31:20.730928 kernel: audit: type=1130 audit(1752193880.727:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.730947 systemd[1]: Started systemd-journald.service. Jul 11 00:31:20.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.733748 kernel: audit: type=1130 audit(1752193880.731:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.734593 systemd[1]: Starting dracut-cmdline-ask.service... Jul 11 00:31:20.739905 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:31:20.743202 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 11 00:31:20.744175 kernel: Bridge firewalling registered Jul 11 00:31:20.745715 systemd-resolved[292]: Positive Trust Anchors: Jul 11 00:31:20.745739 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:31:20.745779 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 11 00:31:20.749926 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 11 00:31:20.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:31:20.750716 systemd[1]: Started systemd-resolved.service. Jul 11 00:31:20.757394 kernel: audit: type=1130 audit(1752193880.752:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.757416 kernel: SCSI subsystem initialized Jul 11 00:31:20.755982 systemd[1]: Reached target nss-lookup.target. Jul 11 00:31:20.758272 systemd[1]: Finished dracut-cmdline-ask.service. Jul 11 00:31:20.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.762525 systemd[1]: Starting dracut-cmdline.service... Jul 11 00:31:20.764609 kernel: audit: type=1130 audit(1752193880.758:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.764627 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:31:20.765233 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:31:20.766317 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 11 00:31:20.768548 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 11 00:31:20.769312 systemd[1]: Finished systemd-modules-load.service. Jul 11 00:31:20.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.770825 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:31:20.773789 kernel: audit: type=1130 audit(1752193880.770:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.773876 dracut-cmdline[309]: dracut-dracut-053 Jul 11 00:31:20.777558 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c Jul 11 00:31:20.782610 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:31:20.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.785753 kernel: audit: type=1130 audit(1752193880.783:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.836755 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:31:20.848878 kernel: iscsi: registered transport (tcp) Jul 11 00:31:20.867772 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:31:20.867830 kernel: QLogic iSCSI HBA Driver Jul 11 00:31:20.902963 systemd[1]: Finished dracut-cmdline.service. 
Jul 11 00:31:20.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.904710 systemd[1]: Starting dracut-pre-udev.service... Jul 11 00:31:20.907014 kernel: audit: type=1130 audit(1752193880.903:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:20.952758 kernel: raid6: neonx8 gen() 12057 MB/s Jul 11 00:31:20.966754 kernel: raid6: neonx8 xor() 10628 MB/s Jul 11 00:31:20.983746 kernel: raid6: neonx4 gen() 13334 MB/s Jul 11 00:31:21.000741 kernel: raid6: neonx4 xor() 11028 MB/s Jul 11 00:31:21.017745 kernel: raid6: neonx2 gen() 12796 MB/s Jul 11 00:31:21.034748 kernel: raid6: neonx2 xor() 10152 MB/s Jul 11 00:31:21.051745 kernel: raid6: neonx1 gen() 10470 MB/s Jul 11 00:31:21.068746 kernel: raid6: neonx1 xor() 8660 MB/s Jul 11 00:31:21.085746 kernel: raid6: int64x8 gen() 6209 MB/s Jul 11 00:31:21.102746 kernel: raid6: int64x8 xor() 3511 MB/s Jul 11 00:31:21.119755 kernel: raid6: int64x4 gen() 7146 MB/s Jul 11 00:31:21.136744 kernel: raid6: int64x4 xor() 3822 MB/s Jul 11 00:31:21.153745 kernel: raid6: int64x2 gen() 6079 MB/s Jul 11 00:31:21.170747 kernel: raid6: int64x2 xor() 3268 MB/s Jul 11 00:31:21.187745 kernel: raid6: int64x1 gen() 4995 MB/s Jul 11 00:31:21.205138 kernel: raid6: int64x1 xor() 2620 MB/s Jul 11 00:31:21.205151 kernel: raid6: using algorithm neonx4 gen() 13334 MB/s Jul 11 00:31:21.205159 kernel: raid6: .... xor() 11028 MB/s, rmw enabled Jul 11 00:31:21.205167 kernel: raid6: using neon recovery algorithm Jul 11 00:31:21.215750 kernel: xor: measuring software checksum speed Jul 11 00:31:21.216742 kernel: 8regs : 16013 MB/sec Jul 11 00:31:21.216753 kernel: 32regs : 19850 MB/sec Jul 11 00:31:21.217784 kernel: arm64_neon : 25688 MB/sec Jul 11 00:31:21.217794 kernel: xor: using function: arm64_neon (25688 MB/sec) Jul 11 00:31:21.274762 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 11 00:31:21.284454 systemd[1]: Finished dracut-pre-udev.service. Jul 11 00:31:21.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:21.286000 audit: BPF prog-id=7 op=LOAD Jul 11 00:31:21.286000 audit: BPF prog-id=8 op=LOAD Jul 11 00:31:21.287766 kernel: audit: type=1130 audit(1752193881.284:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:21.288086 systemd[1]: Starting systemd-udevd.service... Jul 11 00:31:21.300149 systemd-udevd[491]: Using default interface naming scheme 'v252'. Jul 11 00:31:21.303590 systemd[1]: Started systemd-udevd.service. Jul 11 00:31:21.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:21.311599 systemd[1]: Starting dracut-pre-trigger.service... Jul 11 00:31:21.323068 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jul 11 00:31:21.351579 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 11 00:31:21.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:21.353258 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:31:21.390003 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:31:21.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:21.424262 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:31:21.432875 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:31:21.432890 kernel: GPT:9289727 != 19775487 Jul 11 00:31:21.432899 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:31:21.432908 kernel: GPT:9289727 != 19775487 Jul 11 00:31:21.432917 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:31:21.432925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:31:21.448748 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (541) Jul 11 00:31:21.450168 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 11 00:31:21.450953 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 11 00:31:21.459701 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 11 00:31:21.465043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:31:21.468318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 11 00:31:21.470212 systemd[1]: Starting disk-uuid.service... Jul 11 00:31:21.476844 disk-uuid[566]: Primary Header is updated. Jul 11 00:31:21.476844 disk-uuid[566]: Secondary Entries is updated. Jul 11 00:31:21.476844 disk-uuid[566]: Secondary Header is updated. Jul 11 00:31:21.485761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:31:21.489740 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:31:21.492752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:31:22.496540 disk-uuid[567]: The operation has completed successfully. Jul 11 00:31:22.497560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:31:22.517813 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:31:22.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.517910 systemd[1]: Finished disk-uuid.service. Jul 11 00:31:22.522200 systemd[1]: Starting verity-setup.service... Jul 11 00:31:22.540755 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 11 00:31:22.562381 systemd[1]: Found device dev-mapper-usr.device. Jul 11 00:31:22.564635 systemd[1]: Mounting sysusr-usr.mount... Jul 11 00:31:22.566524 systemd[1]: Finished verity-setup.service. Jul 11 00:31:22.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:31:22.613753 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 11 00:31:22.613922 systemd[1]: Mounted sysusr-usr.mount. Jul 11 00:31:22.614655 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 11 00:31:22.615384 systemd[1]: Starting ignition-setup.service... Jul 11 00:31:22.616915 systemd[1]: Starting parse-ip-for-networkd.service... Jul 11 00:31:22.624017 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:31:22.624128 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:31:22.624139 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:31:22.630957 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:31:22.666522 systemd[1]: Finished ignition-setup.service. Jul 11 00:31:22.668189 systemd[1]: Starting ignition-fetch-offline.service... Jul 11 00:31:22.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.689000 audit: BPF prog-id=9 op=LOAD Jul 11 00:31:22.688356 systemd[1]: Finished parse-ip-for-networkd.service. Jul 11 00:31:22.690621 systemd[1]: Starting systemd-networkd.service... Jul 11 00:31:22.713524 systemd-networkd[739]: lo: Link UP Jul 11 00:31:22.713536 systemd-networkd[739]: lo: Gained carrier Jul 11 00:31:22.713940 systemd-networkd[739]: Enumeration completed Jul 11 00:31:22.714121 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:31:22.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.715223 systemd-networkd[739]: eth0: Link UP Jul 11 00:31:22.715228 systemd-networkd[739]: eth0: Gained carrier Jul 11 00:31:22.715819 systemd[1]: Started systemd-networkd.service. Jul 11 00:31:22.717005 systemd[1]: Reached target network.target. Jul 11 00:31:22.718592 systemd[1]: Starting iscsiuio.service... Jul 11 00:31:22.729897 systemd[1]: Started iscsiuio.service. Jul 11 00:31:22.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.731369 systemd[1]: Starting iscsid.service... Jul 11 00:31:22.735101 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:31:22.735101 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 11 00:31:22.735101 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 11 00:31:22.735101 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jul 11 00:31:22.735101 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:31:22.735101 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 11 00:31:22.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.737936 systemd[1]: Started iscsid.service. Jul 11 00:31:22.737937 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:31:22.743150 systemd[1]: Starting dracut-initqueue.service... Jul 11 00:31:22.754496 systemd[1]: Finished dracut-initqueue.service. Jul 11 00:31:22.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.755428 systemd[1]: Reached target remote-fs-pre.target. Jul 11 00:31:22.756609 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:31:22.757920 systemd[1]: Reached target remote-fs.target. Jul 11 00:31:22.759922 systemd[1]: Starting dracut-pre-mount.service... Jul 11 00:31:22.768067 systemd[1]: Finished dracut-pre-mount.service. Jul 11 00:31:22.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.776570 ignition[716]: Ignition 2.14.0 Jul 11 00:31:22.776580 ignition[716]: Stage: fetch-offline Jul 11 00:31:22.776614 ignition[716]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:31:22.776623 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:31:22.776763 ignition[716]: parsed url from cmdline: "" Jul 11 00:31:22.776767 ignition[716]: no config URL provided Jul 11 00:31:22.776771 ignition[716]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:31:22.776778 ignition[716]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:31:22.776797 ignition[716]: op(1): [started] loading QEMU firmware config module Jul 11 00:31:22.776801 ignition[716]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:31:22.780306 ignition[716]: op(1): [finished] loading QEMU firmware config module Jul 11 00:31:22.788011 ignition[716]: parsing config with SHA512: bc47a0e4c16d6adc8cd261ba01b95a4348118639e3e3a1d610a910aba99bb3e345428c2ce14d3aeddb14fceee557411bfbe4fe7c80aff68982b22f8ae7ead857 Jul 11 00:31:22.791956 unknown[716]: fetched base config from "system" Jul 11 00:31:22.791966 unknown[716]: fetched user config from "qemu" Jul 11 00:31:22.792291 ignition[716]: fetch-offline: fetch-offline passed Jul 11 00:31:22.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.793108 systemd[1]: Finished ignition-fetch-offline.service. Jul 11 00:31:22.792342 ignition[716]: Ignition finished successfully Jul 11 00:31:22.794272 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:31:22.795039 systemd[1]: Starting ignition-kargs.service... 
Jul 11 00:31:22.803531 ignition[766]: Ignition 2.14.0 Jul 11 00:31:22.803542 ignition[766]: Stage: kargs Jul 11 00:31:22.803630 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:31:22.805556 systemd[1]: Finished ignition-kargs.service. Jul 11 00:31:22.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.803639 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:31:22.804279 ignition[766]: kargs: kargs passed Jul 11 00:31:22.807677 systemd[1]: Starting ignition-disks.service... Jul 11 00:31:22.804322 ignition[766]: Ignition finished successfully Jul 11 00:31:22.814709 ignition[772]: Ignition 2.14.0 Jul 11 00:31:22.814718 ignition[772]: Stage: disks Jul 11 00:31:22.814861 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:31:22.814871 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:31:22.815537 ignition[772]: disks: disks passed Jul 11 00:31:22.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.817202 systemd[1]: Finished ignition-disks.service. Jul 11 00:31:22.815575 ignition[772]: Ignition finished successfully Jul 11 00:31:22.818898 systemd[1]: Reached target initrd-root-device.target. Jul 11 00:31:22.819943 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:31:22.821055 systemd[1]: Reached target local-fs.target. Jul 11 00:31:22.822160 systemd[1]: Reached target sysinit.target. Jul 11 00:31:22.823501 systemd[1]: Reached target basic.target. Jul 11 00:31:22.825489 systemd[1]: Starting systemd-fsck-root.service... Jul 11 00:31:22.836200 systemd-fsck[780]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 11 00:31:22.839573 systemd[1]: Finished systemd-fsck-root.service. Jul 11 00:31:22.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.841663 systemd[1]: Mounting sysroot.mount... Jul 11 00:31:22.848742 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 11 00:31:22.849010 systemd[1]: Mounted sysroot.mount. Jul 11 00:31:22.849817 systemd[1]: Reached target initrd-root-fs.target. Jul 11 00:31:22.851978 systemd[1]: Mounting sysroot-usr.mount... Jul 11 00:31:22.852976 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 11 00:31:22.853024 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:31:22.853049 systemd[1]: Reached target ignition-diskful.target. Jul 11 00:31:22.855019 systemd[1]: Mounted sysroot-usr.mount. Jul 11 00:31:22.856561 systemd[1]: Starting initrd-setup-root.service... 
Jul 11 00:31:22.860831 initrd-setup-root[790]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:31:22.865249 initrd-setup-root[798]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:31:22.869034 initrd-setup-root[806]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:31:22.872660 initrd-setup-root[814]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:31:22.899899 systemd[1]: Finished initrd-setup-root.service. Jul 11 00:31:22.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.901716 systemd[1]: Starting ignition-mount.service... Jul 11 00:31:22.903452 systemd[1]: Starting sysroot-boot.service... Jul 11 00:31:22.907064 bash[831]: umount: /sysroot/usr/share/oem: not mounted. Jul 11 00:31:22.915899 ignition[832]: INFO : Ignition 2.14.0 Jul 11 00:31:22.915899 ignition[832]: INFO : Stage: mount Jul 11 00:31:22.917214 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:31:22.917214 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:31:22.917214 ignition[832]: INFO : mount: mount passed Jul 11 00:31:22.917214 ignition[832]: INFO : Ignition finished successfully Jul 11 00:31:22.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:22.918054 systemd[1]: Finished ignition-mount.service. Jul 11 00:31:22.926036 systemd[1]: Finished sysroot-boot.service. Jul 11 00:31:22.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:23.573879 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 11 00:31:23.580745 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (842) Jul 11 00:31:23.582745 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:31:23.582762 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:31:23.582792 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:31:23.585631 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 11 00:31:23.587094 systemd[1]: Starting ignition-files.service... 
Jul 11 00:31:23.601609 ignition[862]: INFO : Ignition 2.14.0 Jul 11 00:31:23.601609 ignition[862]: INFO : Stage: files Jul 11 00:31:23.603110 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:31:23.603110 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:31:23.603110 ignition[862]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:31:23.605577 ignition[862]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:31:23.605577 ignition[862]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:31:23.608893 ignition[862]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:31:23.609900 ignition[862]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:31:23.609900 ignition[862]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:31:23.609588 unknown[862]: wrote ssh authorized keys file for user: core Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 11 00:31:23.613021 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 11 00:31:24.142364 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 11 00:31:24.531140 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 11 00:31:24.531140 ignition[862]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 11 00:31:24.533979 ignition[862]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:31:24.533979 ignition[862]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:31:24.533979 ignition[862]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 11 00:31:24.533979 ignition[862]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:31:24.533979 ignition[862]: INFO : files: op(9): op(a): 
[started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:31:24.566586 ignition[862]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:31:24.567757 ignition[862]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:31:24.567757 ignition[862]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:31:24.567757 ignition[862]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:31:24.567757 ignition[862]: INFO : files: files passed Jul 11 00:31:24.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.574115 ignition[862]: INFO : Ignition finished successfully Jul 11 00:31:24.569836 systemd[1]: Finished ignition-files.service. Jul 11 00:31:24.572364 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 11 00:31:24.577110 initrd-setup-root-after-ignition[887]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 11 00:31:24.573522 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 11 00:31:24.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.580012 initrd-setup-root-after-ignition[890]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:31:24.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.574138 systemd[1]: Starting ignition-quench.service... Jul 11 00:31:24.577925 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:31:24.578006 systemd[1]: Finished ignition-quench.service. Jul 11 00:31:24.579141 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 11 00:31:24.580632 systemd[1]: Reached target ignition-complete.target. Jul 11 00:31:24.582674 systemd[1]: Starting initrd-parse-etc.service... Jul 11 00:31:24.594394 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:31:24.594480 systemd[1]: Finished initrd-parse-etc.service. Jul 11 00:31:24.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.595832 systemd[1]: Reached target initrd-fs.target. Jul 11 00:31:24.596667 systemd[1]: Reached target initrd.target. Jul 11 00:31:24.597639 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Jul 11 00:31:24.598335 systemd[1]: Starting dracut-pre-pivot.service... Jul 11 00:31:24.608671 systemd[1]: Finished dracut-pre-pivot.service. Jul 11 00:31:24.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.610251 systemd[1]: Starting initrd-cleanup.service... Jul 11 00:31:24.618770 systemd[1]: Stopped target nss-lookup.target. Jul 11 00:31:24.619644 systemd[1]: Stopped target remote-cryptsetup.target. Jul 11 00:31:24.620836 systemd[1]: Stopped target timers.target. Jul 11 00:31:24.621837 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:31:24.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.621949 systemd[1]: Stopped dracut-pre-pivot.service. Jul 11 00:31:24.622978 systemd[1]: Stopped target initrd.target. Jul 11 00:31:24.623982 systemd[1]: Stopped target basic.target. Jul 11 00:31:24.624956 systemd[1]: Stopped target ignition-complete.target. Jul 11 00:31:24.626018 systemd[1]: Stopped target ignition-diskful.target. Jul 11 00:31:24.627022 systemd[1]: Stopped target initrd-root-device.target. Jul 11 00:31:24.628131 systemd[1]: Stopped target remote-fs.target. Jul 11 00:31:24.629170 systemd[1]: Stopped target remote-fs-pre.target. Jul 11 00:31:24.630260 systemd[1]: Stopped target sysinit.target. Jul 11 00:31:24.631212 systemd[1]: Stopped target local-fs.target. Jul 11 00:31:24.632403 systemd[1]: Stopped target local-fs-pre.target. Jul 11 00:31:24.633428 systemd[1]: Stopped target swap.target. Jul 11 00:31:24.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.634328 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:31:24.634445 systemd[1]: Stopped dracut-pre-mount.service. Jul 11 00:31:24.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.635464 systemd[1]: Stopped target cryptsetup.target. Jul 11 00:31:24.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.636327 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:31:24.636423 systemd[1]: Stopped dracut-initqueue.service. Jul 11 00:31:24.637505 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:31:24.637600 systemd[1]: Stopped ignition-fetch-offline.service. Jul 11 00:31:24.638600 systemd[1]: Stopped target paths.target. Jul 11 00:31:24.639476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:31:24.642776 systemd[1]: Stopped systemd-ask-password-console.path. Jul 11 00:31:24.644013 systemd[1]: Stopped target slices.target. Jul 11 00:31:24.645039 systemd[1]: Stopped target sockets.target. Jul 11 00:31:24.645964 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:31:24.646037 systemd[1]: Closed iscsid.socket. 
Jul 11 00:31:24.646887 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:31:24.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.646952 systemd[1]: Closed iscsiuio.socket. Jul 11 00:31:24.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.647901 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:31:24.648001 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 11 00:31:24.649082 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:31:24.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.649182 systemd[1]: Stopped ignition-files.service. Jul 11 00:31:24.651019 systemd[1]: Stopping ignition-mount.service... Jul 11 00:31:24.651822 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:31:24.651937 systemd[1]: Stopped kmod-static-nodes.service. Jul 11 00:31:24.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.653771 systemd[1]: Stopping sysroot-boot.service... Jul 11 00:31:24.654720 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:31:24.654857 systemd[1]: Stopped systemd-udev-trigger.service. Jul 11 00:31:24.659789 ignition[903]: INFO : Ignition 2.14.0 Jul 11 00:31:24.659789 ignition[903]: INFO : Stage: umount Jul 11 00:31:24.659789 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:31:24.659789 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:31:24.659789 ignition[903]: INFO : umount: umount passed Jul 11 00:31:24.659789 ignition[903]: INFO : Ignition finished successfully Jul 11 00:31:24.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.655887 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:31:24.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.656018 systemd[1]: Stopped dracut-pre-trigger.service. 
Jul 11 00:31:24.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.660698 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:31:24.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.660810 systemd[1]: Finished initrd-cleanup.service. Jul 11 00:31:24.662098 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:31:24.662183 systemd[1]: Stopped ignition-mount.service. Jul 11 00:31:24.663934 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:31:24.664210 systemd[1]: Stopped target network.target. Jul 11 00:31:24.665098 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:31:24.665144 systemd[1]: Stopped ignition-disks.service. Jul 11 00:31:24.666432 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:31:24.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.666471 systemd[1]: Stopped ignition-kargs.service. Jul 11 00:31:24.668937 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:31:24.668973 systemd[1]: Stopped ignition-setup.service. Jul 11 00:31:24.670310 systemd[1]: Stopping systemd-networkd.service... Jul 11 00:31:24.671129 systemd[1]: Stopping systemd-resolved.service... Jul 11 00:31:24.676680 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:31:24.676798 systemd[1]: Stopped systemd-resolved.service. Jul 11 00:31:24.685820 systemd-networkd[739]: eth0: DHCPv6 lease lost Jul 11 00:31:24.686941 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:31:24.687047 systemd[1]: Stopped systemd-networkd.service. Jul 11 00:31:24.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.688000 audit: BPF prog-id=6 op=UNLOAD Jul 11 00:31:24.688525 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:31:24.688554 systemd[1]: Closed systemd-networkd.socket. Jul 11 00:31:24.690318 systemd[1]: Stopping network-cleanup.service... Jul 11 00:31:24.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.691049 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:31:24.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.691104 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 11 00:31:24.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.692281 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 11 00:31:24.697000 audit: BPF prog-id=9 op=UNLOAD Jul 11 00:31:24.692317 systemd[1]: Stopped systemd-sysctl.service. Jul 11 00:31:24.694214 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:31:24.694251 systemd[1]: Stopped systemd-modules-load.service. Jul 11 00:31:24.696066 systemd[1]: Stopping systemd-udevd.service... Jul 11 00:31:24.700146 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 00:31:24.701959 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:31:24.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.702086 systemd[1]: Stopped systemd-udevd.service. Jul 11 00:31:24.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.703762 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:31:24.703844 systemd[1]: Stopped network-cleanup.service. Jul 11 00:31:24.704808 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:31:24.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.704846 systemd[1]: Closed systemd-udevd-control.socket. Jul 11 00:31:24.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.705803 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:31:24.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.705834 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 11 00:31:24.707059 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:31:24.707101 systemd[1]: Stopped dracut-pre-udev.service. Jul 11 00:31:24.708787 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:31:24.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.708832 systemd[1]: Stopped dracut-cmdline.service. Jul 11 00:31:24.710139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:31:24.710188 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 11 00:31:24.712594 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 11 00:31:24.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.713994 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:31:24.714052 systemd[1]: Stopped systemd-vconsole-setup.service. 
Jul 11 00:31:24.718244 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:31:24.718335 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 11 00:31:24.731426 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:31:24.731523 systemd[1]: Stopped sysroot-boot.service. Jul 11 00:31:24.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.732783 systemd[1]: Reached target initrd-switch-root.target. Jul 11 00:31:24.733600 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:31:24.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.733644 systemd[1]: Stopped initrd-setup-root.service. Jul 11 00:31:24.735395 systemd[1]: Starting initrd-switch-root.service... Jul 11 00:31:24.741651 systemd[1]: Switching root. Jul 11 00:31:24.761546 iscsid[749]: iscsid shutting down. Jul 11 00:31:24.762254 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Jul 11 00:31:24.762307 systemd-journald[290]: Journal stopped Jul 11 00:31:26.776322 kernel: SELinux: Class mctp_socket not defined in policy. Jul 11 00:31:26.776377 kernel: SELinux: Class anon_inode not defined in policy. Jul 11 00:31:26.776388 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 11 00:31:26.776398 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:31:26.776408 kernel: SELinux: policy capability open_perms=1 Jul 11 00:31:26.776418 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:31:26.776431 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:31:26.776442 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:31:26.776452 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:31:26.776461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:31:26.776471 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:31:26.776481 systemd[1]: Successfully loaded SELinux policy in 35.788ms. Jul 11 00:31:26.776500 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.232ms. Jul 11 00:31:26.776513 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 11 00:31:26.776524 systemd[1]: Detected virtualization kvm. Jul 11 00:31:26.776536 systemd[1]: Detected architecture arm64. Jul 11 00:31:26.776546 systemd[1]: Detected first boot. Jul 11 00:31:26.776557 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:31:26.776570 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 11 00:31:26.776582 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:31:26.776593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:31:26.776605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 11 00:31:26.776620 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:31:26.776633 kernel: kauditd_printk_skb: 78 callbacks suppressed Jul 11 00:31:26.776642 kernel: audit: type=1334 audit(1752193886.653:82): prog-id=12 op=LOAD Jul 11 00:31:26.776656 kernel: audit: type=1334 audit(1752193886.653:83): prog-id=3 op=UNLOAD Jul 11 00:31:26.776666 kernel: audit: type=1334 audit(1752193886.653:84): prog-id=13 op=LOAD Jul 11 00:31:26.776676 kernel: audit: type=1334 audit(1752193886.653:85): prog-id=14 op=LOAD Jul 11 00:31:26.776685 kernel: audit: type=1334 audit(1752193886.653:86): prog-id=4 op=UNLOAD Jul 11 00:31:26.776695 kernel: audit: type=1334 audit(1752193886.654:87): prog-id=5 op=UNLOAD Jul 11 00:31:26.776704 kernel: audit: type=1334 audit(1752193886.654:88): prog-id=15 op=LOAD Jul 11 00:31:26.776715 kernel: audit: type=1334 audit(1752193886.654:89): prog-id=12 op=UNLOAD Jul 11 00:31:26.776726 kernel: audit: type=1334 audit(1752193886.655:90): prog-id=16 op=LOAD Jul 11 00:31:26.776758 kernel: audit: type=1334 audit(1752193886.655:91): prog-id=17 op=LOAD Jul 11 00:31:26.776769 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 11 00:31:26.776781 systemd[1]: Stopped iscsiuio.service. Jul 11 00:31:26.776791 systemd[1]: iscsid.service: Deactivated successfully. Jul 11 00:31:26.776801 systemd[1]: Stopped iscsid.service. Jul 11 00:31:26.776812 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:31:26.776823 systemd[1]: Stopped initrd-switch-root.service. Jul 11 00:31:26.776834 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:31:26.776846 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 11 00:31:26.776856 systemd[1]: Created slice system-addon\x2drun.slice. Jul 11 00:31:26.776867 systemd[1]: Created slice system-getty.slice. Jul 11 00:31:26.776877 systemd[1]: Created slice system-modprobe.slice. Jul 11 00:31:26.776888 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 11 00:31:26.776898 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 11 00:31:26.776909 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 11 00:31:26.776920 systemd[1]: Created slice user.slice. Jul 11 00:31:26.776930 systemd[1]: Started systemd-ask-password-console.path. Jul 11 00:31:26.776940 systemd[1]: Started systemd-ask-password-wall.path. Jul 11 00:31:26.776951 systemd[1]: Set up automount boot.automount. Jul 11 00:31:26.776962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 11 00:31:26.776973 systemd[1]: Stopped target initrd-switch-root.target. Jul 11 00:31:26.776983 systemd[1]: Stopped target initrd-fs.target. Jul 11 00:31:26.776994 systemd[1]: Stopped target initrd-root-fs.target. Jul 11 00:31:26.777004 systemd[1]: Reached target integritysetup.target. Jul 11 00:31:26.777016 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:31:26.777027 systemd[1]: Reached target remote-fs.target. Jul 11 00:31:26.777038 systemd[1]: Reached target slices.target. Jul 11 00:31:26.777049 systemd[1]: Reached target swap.target. Jul 11 00:31:26.777059 systemd[1]: Reached target torcx.target. Jul 11 00:31:26.777070 systemd[1]: Reached target veritysetup.target. Jul 11 00:31:26.777080 systemd[1]: Listening on systemd-coredump.socket. 
Jul 11 00:31:26.777092 systemd[1]: Listening on systemd-initctl.socket. Jul 11 00:31:26.777103 systemd[1]: Listening on systemd-networkd.socket. Jul 11 00:31:26.777113 systemd[1]: Listening on systemd-udevd-control.socket. Jul 11 00:31:26.777124 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 11 00:31:26.777135 systemd[1]: Listening on systemd-userdbd.socket. Jul 11 00:31:26.777145 systemd[1]: Mounting dev-hugepages.mount... Jul 11 00:31:26.777162 systemd[1]: Mounting dev-mqueue.mount... Jul 11 00:31:26.777173 systemd[1]: Mounting media.mount... Jul 11 00:31:26.777183 systemd[1]: Mounting sys-kernel-debug.mount... Jul 11 00:31:26.777195 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 11 00:31:26.777205 systemd[1]: Mounting tmp.mount... Jul 11 00:31:26.777216 systemd[1]: Starting flatcar-tmpfiles.service... Jul 11 00:31:26.777227 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:31:26.777237 systemd[1]: Starting kmod-static-nodes.service... Jul 11 00:31:26.777248 systemd[1]: Starting modprobe@configfs.service... Jul 11 00:31:26.777258 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:31:26.777269 systemd[1]: Starting modprobe@drm.service... Jul 11 00:31:26.777279 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:31:26.777292 systemd[1]: Starting modprobe@fuse.service... Jul 11 00:31:26.777302 systemd[1]: Starting modprobe@loop.service... Jul 11 00:31:26.777313 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:31:26.777323 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:31:26.777334 systemd[1]: Stopped systemd-fsck-root.service. Jul 11 00:31:26.777344 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:31:26.777355 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:31:26.777365 systemd[1]: Stopped systemd-journald.service. Jul 11 00:31:26.777375 kernel: fuse: init (API version 7.34) Jul 11 00:31:26.777386 systemd[1]: Starting systemd-journald.service... Jul 11 00:31:26.777397 kernel: loop: module loaded Jul 11 00:31:26.777407 systemd[1]: Starting systemd-modules-load.service... Jul 11 00:31:26.777418 systemd[1]: Starting systemd-network-generator.service... Jul 11 00:31:26.777428 systemd[1]: Starting systemd-remount-fs.service... Jul 11 00:31:26.777439 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:31:26.777451 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:31:26.777462 systemd[1]: Stopped verity-setup.service. Jul 11 00:31:26.777472 systemd[1]: Mounted dev-hugepages.mount. Jul 11 00:31:26.777483 systemd[1]: Mounted dev-mqueue.mount. Jul 11 00:31:26.777494 systemd[1]: Mounted media.mount. Jul 11 00:31:26.777505 systemd[1]: Mounted sys-kernel-debug.mount. Jul 11 00:31:26.777516 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 11 00:31:26.777527 systemd[1]: Mounted tmp.mount. Jul 11 00:31:26.777537 systemd[1]: Finished kmod-static-nodes.service. Jul 11 00:31:26.777547 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:31:26.777558 systemd[1]: Finished modprobe@configfs.service. Jul 11 00:31:26.777570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:31:26.777580 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 11 00:31:26.777593 systemd-journald[1002]: Journal started Jul 11 00:31:26.777629 systemd-journald[1002]: Runtime Journal (/run/log/journal/8c7360041e0e454db52779c75e761edc) is 6.0M, max 48.7M, 42.6M free. Jul 11 00:31:24.824000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:31:24.891000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:31:24.891000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:31:24.891000 audit: BPF prog-id=10 op=LOAD Jul 11 00:31:24.891000 audit: BPF prog-id=10 op=UNLOAD Jul 11 00:31:24.891000 audit: BPF prog-id=11 op=LOAD Jul 11 00:31:24.891000 audit: BPF prog-id=11 op=UNLOAD Jul 11 00:31:24.936000 audit[937]: AVC avc: denied { associate } for pid=937 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 11 00:31:24.936000 audit[937]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=920 pid=937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:31:24.936000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 11 00:31:24.937000 audit[937]: AVC avc: denied { associate } for pid=937 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 11 00:31:24.937000 audit[937]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=920 pid=937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:31:24.937000 audit: CWD cwd="/" Jul 11 00:31:24.937000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:31:24.937000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:31:24.937000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 11 00:31:26.653000 audit: BPF prog-id=12 op=LOAD Jul 11 00:31:26.653000 audit: BPF prog-id=3 op=UNLOAD Jul 11 00:31:26.653000 audit: BPF prog-id=13 op=LOAD Jul 11 00:31:26.653000 audit: BPF prog-id=14 op=LOAD Jul 11 00:31:26.653000 audit: 
BPF prog-id=4 op=UNLOAD Jul 11 00:31:26.654000 audit: BPF prog-id=5 op=UNLOAD Jul 11 00:31:26.654000 audit: BPF prog-id=15 op=LOAD Jul 11 00:31:26.654000 audit: BPF prog-id=12 op=UNLOAD Jul 11 00:31:26.655000 audit: BPF prog-id=16 op=LOAD Jul 11 00:31:26.655000 audit: BPF prog-id=17 op=LOAD Jul 11 00:31:26.655000 audit: BPF prog-id=13 op=UNLOAD Jul 11 00:31:26.655000 audit: BPF prog-id=14 op=UNLOAD Jul 11 00:31:26.657000 audit: BPF prog-id=18 op=LOAD Jul 11 00:31:26.657000 audit: BPF prog-id=15 op=UNLOAD Jul 11 00:31:26.658000 audit: BPF prog-id=19 op=LOAD Jul 11 00:31:26.658000 audit: BPF prog-id=20 op=LOAD Jul 11 00:31:26.658000 audit: BPF prog-id=16 op=UNLOAD Jul 11 00:31:26.658000 audit: BPF prog-id=17 op=UNLOAD Jul 11 00:31:26.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.674000 audit: BPF prog-id=18 op=UNLOAD Jul 11 00:31:26.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.748000 audit: BPF prog-id=21 op=LOAD Jul 11 00:31:26.749000 audit: BPF prog-id=22 op=LOAD Jul 11 00:31:26.749000 audit: BPF prog-id=23 op=LOAD Jul 11 00:31:26.749000 audit: BPF prog-id=19 op=UNLOAD Jul 11 00:31:26.749000 audit: BPF prog-id=20 op=UNLOAD Jul 11 00:31:26.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:31:26.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.774000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 11 00:31:26.774000 audit[1002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc6457400 a2=4000 a3=1 items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:31:26.774000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 11 00:31:26.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.934456 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:31:26.653240 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:31:26.779055 systemd[1]: Started systemd-journald.service. Jul 11 00:31:24.934703 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 11 00:31:26.653253 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 11 00:31:26.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:24.934721 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 11 00:31:26.659960 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:31:24.934762 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 11 00:31:26.779373 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 11 00:31:24.934772 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 11 00:31:24.934798 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 11 00:31:26.779523 systemd[1]: Finished modprobe@drm.service. Jul 11 00:31:24.934810 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 11 00:31:24.935109 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 11 00:31:24.935155 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 11 00:31:24.935227 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 11 00:31:24.936319 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 11 00:31:24.936355 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 11 00:31:24.936374 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 11 00:31:24.936387 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 11 00:31:26.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:31:24.936407 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 11 00:31:24.936420 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 11 00:31:26.379067 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:26Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:31:26.379337 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:26Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:31:26.379433 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:26Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:31:26.379599 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:26Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:31:26.379653 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:26Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 11 00:31:26.379713 /usr/lib/systemd/system-generators/torcx-generator[937]: time="2025-07-11T00:31:26Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 11 00:31:26.780713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:31:26.780847 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:31:26.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.781834 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:31:26.781997 systemd[1]: Finished modprobe@fuse.service. Jul 11 00:31:26.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:31:26.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.782882 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:31:26.783039 systemd[1]: Finished modprobe@loop.service. Jul 11 00:31:26.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.783926 systemd[1]: Finished systemd-modules-load.service. Jul 11 00:31:26.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.784958 systemd[1]: Finished flatcar-tmpfiles.service. Jul 11 00:31:26.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.785805 systemd[1]: Finished systemd-network-generator.service. Jul 11 00:31:26.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.786830 systemd[1]: Finished systemd-remount-fs.service. Jul 11 00:31:26.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.787912 systemd[1]: Reached target network-pre.target. Jul 11 00:31:26.789818 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 11 00:31:26.791497 systemd[1]: Mounting sys-kernel-config.mount... Jul 11 00:31:26.792240 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:31:26.794119 systemd[1]: Starting systemd-hwdb-update.service... Jul 11 00:31:26.795978 systemd[1]: Starting systemd-journal-flush.service... Jul 11 00:31:26.796668 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:31:26.797841 systemd[1]: Starting systemd-random-seed.service... Jul 11 00:31:26.798485 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:31:26.799936 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:31:26.803046 systemd[1]: Starting systemd-sysusers.service... Jul 11 00:31:26.804099 systemd-journald[1002]: Time spent on flushing to /var/log/journal/8c7360041e0e454db52779c75e761edc is 12.538ms for 981 entries. Jul 11 00:31:26.804099 systemd-journald[1002]: System Journal (/var/log/journal/8c7360041e0e454db52779c75e761edc) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:31:26.827331 systemd-journald[1002]: Received client request to flush runtime journal. 
Jul 11 00:31:26.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.806466 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 11 00:31:26.807483 systemd[1]: Mounted sys-kernel-config.mount. Jul 11 00:31:26.813979 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:31:26.815928 systemd[1]: Starting systemd-udev-settle.service... Jul 11 00:31:26.826396 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:31:26.827517 systemd[1]: Finished systemd-random-seed.service. Jul 11 00:31:26.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.828595 systemd[1]: Finished systemd-journal-flush.service. Jul 11 00:31:26.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.829588 systemd[1]: Finished systemd-sysusers.service. Jul 11 00:31:26.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:26.830558 udevadm[1037]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:31:26.830597 systemd[1]: Reached target first-boot-complete.target. Jul 11 00:31:27.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.175806 systemd[1]: Finished systemd-hwdb-update.service. Jul 11 00:31:27.176000 audit: BPF prog-id=24 op=LOAD Jul 11 00:31:27.176000 audit: BPF prog-id=25 op=LOAD Jul 11 00:31:27.176000 audit: BPF prog-id=7 op=UNLOAD Jul 11 00:31:27.176000 audit: BPF prog-id=8 op=UNLOAD Jul 11 00:31:27.177960 systemd[1]: Starting systemd-udevd.service... Jul 11 00:31:27.196840 systemd-udevd[1039]: Using default interface naming scheme 'v252'. Jul 11 00:31:27.208440 systemd[1]: Started systemd-udevd.service. Jul 11 00:31:27.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.209000 audit: BPF prog-id=26 op=LOAD Jul 11 00:31:27.210931 systemd[1]: Starting systemd-networkd.service... Jul 11 00:31:27.217000 audit: BPF prog-id=27 op=LOAD Jul 11 00:31:27.217000 audit: BPF prog-id=28 op=LOAD Jul 11 00:31:27.217000 audit: BPF prog-id=29 op=LOAD Jul 11 00:31:27.219127 systemd[1]: Starting systemd-userdbd.service... Jul 11 00:31:27.228337 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 11 00:31:27.255288 systemd[1]: Started systemd-userdbd.service. 
Jul 11 00:31:27.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.265961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:31:27.303156 systemd-networkd[1046]: lo: Link UP Jul 11 00:31:27.303166 systemd-networkd[1046]: lo: Gained carrier Jul 11 00:31:27.303530 systemd-networkd[1046]: Enumeration completed Jul 11 00:31:27.303624 systemd[1]: Started systemd-networkd.service. Jul 11 00:31:27.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.304481 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:31:27.309188 systemd-networkd[1046]: eth0: Link UP Jul 11 00:31:27.309200 systemd-networkd[1046]: eth0: Gained carrier Jul 11 00:31:27.314083 systemd[1]: Finished systemd-udev-settle.service. Jul 11 00:31:27.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.315924 systemd[1]: Starting lvm2-activation-early.service... Jul 11 00:31:27.327861 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:31:27.329134 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:31:27.357651 systemd[1]: Finished lvm2-activation-early.service. Jul 11 00:31:27.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.358758 systemd[1]: Reached target cryptsetup.target. Jul 11 00:31:27.360538 systemd[1]: Starting lvm2-activation.service... Jul 11 00:31:27.364005 lvm[1073]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:31:27.398628 systemd[1]: Finished lvm2-activation.service. Jul 11 00:31:27.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.399404 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:31:27.400036 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:31:27.400063 systemd[1]: Reached target local-fs.target. Jul 11 00:31:27.400619 systemd[1]: Reached target machines.target. Jul 11 00:31:27.402401 systemd[1]: Starting ldconfig.service... Jul 11 00:31:27.403369 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.403421 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:27.404519 systemd[1]: Starting systemd-boot-update.service... Jul 11 00:31:27.406121 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
Jul 11 00:31:27.408089 systemd[1]: Starting systemd-machine-id-commit.service... Jul 11 00:31:27.409852 systemd[1]: Starting systemd-sysext.service... Jul 11 00:31:27.411180 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1075 (bootctl) Jul 11 00:31:27.412178 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 11 00:31:27.419782 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 11 00:31:27.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.424990 systemd[1]: Unmounting usr-share-oem.mount... Jul 11 00:31:27.431587 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 11 00:31:27.431790 systemd[1]: Unmounted usr-share-oem.mount. Jul 11 00:31:27.481765 kernel: loop0: detected capacity change from 0 to 211168 Jul 11 00:31:27.483220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:31:27.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.483782 systemd[1]: Finished systemd-machine-id-commit.service. Jul 11 00:31:27.493759 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:31:27.496113 systemd-fsck[1083]: fsck.fat 4.2 (2021-01-31) Jul 11 00:31:27.496113 systemd-fsck[1083]: /dev/vda1: 236 files, 117310/258078 clusters Jul 11 00:31:27.498021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 11 00:31:27.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.500547 systemd[1]: Mounting boot.mount... Jul 11 00:31:27.509022 systemd[1]: Mounted boot.mount. Jul 11 00:31:27.517985 systemd[1]: Finished systemd-boot-update.service. Jul 11 00:31:27.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.521814 kernel: loop1: detected capacity change from 0 to 211168 Jul 11 00:31:27.525790 (sd-sysext)[1089]: Using extensions 'kubernetes'. Jul 11 00:31:27.526232 (sd-sysext)[1089]: Merged extensions into '/usr'. Jul 11 00:31:27.543882 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.545376 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:31:27.547522 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:31:27.549662 systemd[1]: Starting modprobe@loop.service... Jul 11 00:31:27.550642 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.550774 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:27.551497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:31:27.551623 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 11 00:31:27.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.553437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:31:27.553550 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:31:27.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.554906 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:31:27.555011 systemd[1]: Finished modprobe@loop.service. Jul 11 00:31:27.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.556223 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:31:27.556322 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.591353 ldconfig[1074]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:31:27.594642 systemd[1]: Finished ldconfig.service. Jul 11 00:31:27.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.766870 systemd[1]: Mounting usr-share-oem.mount... Jul 11 00:31:27.771801 systemd[1]: Mounted usr-share-oem.mount. Jul 11 00:31:27.773558 systemd[1]: Finished systemd-sysext.service. Jul 11 00:31:27.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.775522 systemd[1]: Starting ensure-sysext.service... Jul 11 00:31:27.777122 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 11 00:31:27.781282 systemd[1]: Reloading. Jul 11 00:31:27.789048 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 11 00:31:27.790902 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:31:27.793574 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 11 00:31:27.815336 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2025-07-11T00:31:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:31:27.815365 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2025-07-11T00:31:27Z" level=info msg="torcx already run" Jul 11 00:31:27.876929 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:31:27.876948 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:31:27.894015 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:31:27.936000 audit: BPF prog-id=30 op=LOAD Jul 11 00:31:27.936000 audit: BPF prog-id=21 op=UNLOAD Jul 11 00:31:27.936000 audit: BPF prog-id=31 op=LOAD Jul 11 00:31:27.936000 audit: BPF prog-id=32 op=LOAD Jul 11 00:31:27.936000 audit: BPF prog-id=22 op=UNLOAD Jul 11 00:31:27.936000 audit: BPF prog-id=23 op=UNLOAD Jul 11 00:31:27.937000 audit: BPF prog-id=33 op=LOAD Jul 11 00:31:27.937000 audit: BPF prog-id=26 op=UNLOAD Jul 11 00:31:27.938000 audit: BPF prog-id=34 op=LOAD Jul 11 00:31:27.938000 audit: BPF prog-id=27 op=UNLOAD Jul 11 00:31:27.938000 audit: BPF prog-id=35 op=LOAD Jul 11 00:31:27.938000 audit: BPF prog-id=36 op=LOAD Jul 11 00:31:27.938000 audit: BPF prog-id=28 op=UNLOAD Jul 11 00:31:27.938000 audit: BPF prog-id=29 op=UNLOAD Jul 11 00:31:27.939000 audit: BPF prog-id=37 op=LOAD Jul 11 00:31:27.939000 audit: BPF prog-id=38 op=LOAD Jul 11 00:31:27.939000 audit: BPF prog-id=24 op=UNLOAD Jul 11 00:31:27.939000 audit: BPF prog-id=25 op=UNLOAD Jul 11 00:31:27.941851 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 11 00:31:27.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.946066 systemd[1]: Starting audit-rules.service... Jul 11 00:31:27.947687 systemd[1]: Starting clean-ca-certificates.service... Jul 11 00:31:27.949979 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 11 00:31:27.954000 audit: BPF prog-id=39 op=LOAD Jul 11 00:31:27.956190 systemd[1]: Starting systemd-resolved.service... Jul 11 00:31:27.961000 audit: BPF prog-id=40 op=LOAD Jul 11 00:31:27.963816 systemd[1]: Starting systemd-timesyncd.service... Jul 11 00:31:27.965825 systemd[1]: Starting systemd-update-utmp.service... Jul 11 00:31:27.972221 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.974447 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:31:27.976482 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:31:27.978630 systemd[1]: Starting modprobe@loop.service... Jul 11 00:31:27.979675 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
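The warnings above about locksmithd.service flag legacy cgroup-v1 directives: CPUShares= should become CPUWeight= and MemoryLimit= should become MemoryMax=. A small sketch for finding units that still use the old names is below; it scans only the two main unit directories and skips drop-ins, which is an intentional simplification.

#!/usr/bin/env python3
# Locate service units still using CPUShares= or MemoryLimit=, the
# directives systemd deprecates in favour of CPUWeight= / MemoryMax=.
import re
from pathlib import Path

LEGACY = re.compile(r"^\s*(CPUShares|MemoryLimit)\s*=", re.IGNORECASE)

for unit_dir in (Path("/usr/lib/systemd/system"), Path("/etc/systemd/system")):
    for unit in sorted(unit_dir.glob("*.service")):
        for lineno, line in enumerate(unit.read_text(errors="replace").splitlines(), start=1):
            if LEGACY.match(line):
                print(f"{unit}:{lineno}: {line.strip()}")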
Jul 11 00:31:27.979826 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:27.980000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.981847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:31:27.981987 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:31:27.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.983057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:31:27.983173 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:31:27.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.984288 systemd[1]: Finished clean-ca-certificates.service. Jul 11 00:31:27.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.985370 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:31:27.985486 systemd[1]: Finished modprobe@loop.service. Jul 11 00:31:27.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.988054 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:31:27.988266 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.988399 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:31:27.990613 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.992239 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:31:27.994103 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:31:27.995890 systemd[1]: Starting modprobe@loop.service... 
Jul 11 00:31:27.996469 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:31:27.996596 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:27.996690 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:31:27.997598 systemd[1]: Finished systemd-update-utmp.service. Jul 11 00:31:27.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.998837 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 11 00:31:27.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.999911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:31:28.000023 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:31:27.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:27.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.001204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:31:28.001329 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:31:28.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.002311 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:31:28.002422 systemd[1]: Finished modprobe@loop.service. Jul 11 00:31:28.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.004664 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:31:28.004792 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:31:28.006054 systemd[1]: Starting systemd-update-done.service... Jul 11 00:31:28.009996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 11 00:31:28.011330 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:31:28.012992 systemd[1]: Starting modprobe@drm.service... Jul 11 00:31:28.014715 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:31:28.016534 systemd[1]: Starting modprobe@loop.service... Jul 11 00:31:28.017277 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:31:28.017483 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:28.019011 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 11 00:31:28.019788 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:31:28.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.021064 systemd[1]: Finished systemd-update-done.service. Jul 11 00:31:28.022189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:31:28.022306 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:31:28.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.023451 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:31:28.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.023566 systemd[1]: Finished modprobe@drm.service. Jul 11 00:31:28.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.024697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:31:28.024839 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:31:28.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:31:28.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 11 00:31:28.025942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:31:28.026047 systemd[1]: Finished modprobe@loop.service. Jul 11 00:31:28.027171 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:31:28.027276 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:31:28.031209 systemd[1]: Finished ensure-sysext.service. Jul 11 00:31:28.031000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 11 00:31:28.031000 audit[1187]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6c896e0 a2=420 a3=0 items=0 ppid=1155 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:31:28.031000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 11 00:31:28.032122 augenrules[1187]: No rules Jul 11 00:31:28.032771 systemd[1]: Finished audit-rules.service. Jul 11 00:31:28.036105 systemd[1]: Started systemd-timesyncd.service. Jul 11 00:31:28.479170 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:31:28.479244 systemd-timesyncd[1160]: Initial clock synchronization to Fri 2025-07-11 00:31:28.479073 UTC. Jul 11 00:31:28.479598 systemd[1]: Reached target time-set.target. Jul 11 00:31:28.486177 systemd-resolved[1159]: Positive Trust Anchors: Jul 11 00:31:28.486189 systemd-resolved[1159]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:31:28.486215 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 11 00:31:28.500318 systemd-resolved[1159]: Defaulting to hostname 'linux'. Jul 11 00:31:28.501831 systemd[1]: Started systemd-resolved.service. Jul 11 00:31:28.502689 systemd[1]: Reached target network.target. Jul 11 00:31:28.503268 systemd[1]: Reached target nss-lookup.target. Jul 11 00:31:28.503834 systemd[1]: Reached target sysinit.target. Jul 11 00:31:28.504467 systemd[1]: Started motdgen.path. Jul 11 00:31:28.504986 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 11 00:31:28.505972 systemd[1]: Started logrotate.timer. Jul 11 00:31:28.506619 systemd[1]: Started mdadm.timer. Jul 11 00:31:28.507100 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 11 00:31:28.507717 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:31:28.507747 systemd[1]: Reached target paths.target. Jul 11 00:31:28.508305 systemd[1]: Reached target timers.target. Jul 11 00:31:28.509187 systemd[1]: Listening on dbus.socket. Jul 11 00:31:28.510801 systemd[1]: Starting docker.socket... Jul 11 00:31:28.513745 systemd[1]: Listening on sshd.socket. 
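The systemd-resolved "Positive Trust Anchors" entry above is the built-in DNSSEC trust anchor for the root zone. Its numeric fields are easier to read decoded; the names below follow the IANA DNSSEC registries and are given purely for reference.

#!/usr/bin/env python3
# Decode the numeric fields of the root-zone DS record shown in the
# systemd-resolved trust-anchor log line.
key_tag, algorithm, digest_type = 20326, 8, 2
print("key tag:    ", key_tag)                       # the 2017 root key-signing key
print("algorithm:  ", algorithm, "(RSA/SHA-256)")
print("digest type:", digest_type, "(SHA-256)")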
Jul 11 00:31:28.514633 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:28.515053 systemd[1]: Listening on docker.socket. Jul 11 00:31:28.515716 systemd[1]: Reached target sockets.target. Jul 11 00:31:28.516290 systemd[1]: Reached target basic.target. Jul 11 00:31:28.516846 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:31:28.516879 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:31:28.517843 systemd[1]: Starting containerd.service... Jul 11 00:31:28.519372 systemd[1]: Starting dbus.service... Jul 11 00:31:28.520888 systemd[1]: Starting enable-oem-cloudinit.service... Jul 11 00:31:28.522555 systemd[1]: Starting extend-filesystems.service... Jul 11 00:31:28.523314 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 11 00:31:28.526030 jq[1197]: false Jul 11 00:31:28.524541 systemd[1]: Starting motdgen.service... Jul 11 00:31:28.526836 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 11 00:31:28.528555 systemd[1]: Starting sshd-keygen.service... Jul 11 00:31:28.533657 systemd[1]: Starting systemd-logind.service... Jul 11 00:31:28.534553 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:31:28.534638 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:31:28.535618 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:31:28.536349 systemd[1]: Starting update-engine.service... Jul 11 00:31:28.538147 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 11 00:31:28.540599 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:31:28.540759 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 11 00:31:28.541044 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:31:28.541216 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 11 00:31:28.542251 jq[1214]: true Jul 11 00:31:28.551311 extend-filesystems[1198]: Found loop1 Jul 11 00:31:28.556349 jq[1217]: true Jul 11 00:31:28.557621 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:31:28.557755 dbus-daemon[1196]: [system] SELinux support is enabled Jul 11 00:31:28.557792 systemd[1]: Finished motdgen.service. Jul 11 00:31:28.558693 systemd[1]: Started dbus.service. Jul 11 00:31:28.561247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:31:28.561277 systemd[1]: Reached target system-config.target. Jul 11 00:31:28.562166 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:31:28.562194 systemd[1]: Reached target user-config.target. 
Jul 11 00:31:28.562376 extend-filesystems[1198]: Found vda Jul 11 00:31:28.563722 extend-filesystems[1198]: Found vda1 Jul 11 00:31:28.564455 extend-filesystems[1198]: Found vda2 Jul 11 00:31:28.565183 extend-filesystems[1198]: Found vda3 Jul 11 00:31:28.565888 extend-filesystems[1198]: Found usr Jul 11 00:31:28.566621 extend-filesystems[1198]: Found vda4 Jul 11 00:31:28.567351 extend-filesystems[1198]: Found vda6 Jul 11 00:31:28.579222 extend-filesystems[1198]: Found vda7 Jul 11 00:31:28.580168 extend-filesystems[1198]: Found vda9 Jul 11 00:31:28.580168 extend-filesystems[1198]: Checking size of /dev/vda9 Jul 11 00:31:28.596545 extend-filesystems[1198]: Resized partition /dev/vda9 Jul 11 00:31:28.598934 extend-filesystems[1244]: resize2fs 1.46.5 (30-Dec-2021) Jul 11 00:31:28.598055 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:31:28.604472 systemd-logind[1207]: New seat seat0. Jul 11 00:31:28.607136 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:31:28.611920 systemd[1]: Started systemd-logind.service. Jul 11 00:31:28.622836 update_engine[1213]: I0711 00:31:28.622634 1213 main.cc:92] Flatcar Update Engine starting Jul 11 00:31:28.631124 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:31:28.633480 systemd[1]: Started update-engine.service. Jul 11 00:31:28.637032 systemd[1]: Started locksmithd.service. Jul 11 00:31:28.643707 update_engine[1213]: I0711 00:31:28.638424 1213 update_check_scheduler.cc:74] Next update check in 4m5s Jul 11 00:31:28.643991 extend-filesystems[1244]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:31:28.643991 extend-filesystems[1244]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:31:28.643991 extend-filesystems[1244]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:31:28.648892 extend-filesystems[1198]: Resized filesystem in /dev/vda9 Jul 11 00:31:28.646100 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:31:28.646305 systemd[1]: Finished extend-filesystems.service. Jul 11 00:31:28.653835 bash[1241]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:31:28.654678 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 11 00:31:28.659353 env[1218]: time="2025-07-11T00:31:28.659219620Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 11 00:31:28.680195 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:31:28.683468 env[1218]: time="2025-07-11T00:31:28.683432980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:31:28.683633 env[1218]: time="2025-07-11T00:31:28.683611340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689281 env[1218]: time="2025-07-11T00:31:28.689245900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689328 env[1218]: time="2025-07-11T00:31:28.689280620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689517 env[1218]: time="2025-07-11T00:31:28.689490860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689554 env[1218]: time="2025-07-11T00:31:28.689515460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689554 env[1218]: time="2025-07-11T00:31:28.689539620Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 11 00:31:28.689554 env[1218]: time="2025-07-11T00:31:28.689550100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689650 env[1218]: time="2025-07-11T00:31:28.689630380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:31:28.689936 env[1218]: time="2025-07-11T00:31:28.689913380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:31:28.690053 env[1218]: time="2025-07-11T00:31:28.690033140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:31:28.690083 env[1218]: time="2025-07-11T00:31:28.690051860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:31:28.690152 env[1218]: time="2025-07-11T00:31:28.690106980Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 11 00:31:28.690182 env[1218]: time="2025-07-11T00:31:28.690151540Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:31:28.693679 env[1218]: time="2025-07-11T00:31:28.693652660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:31:28.693719 env[1218]: time="2025-07-11T00:31:28.693686460Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:31:28.693719 env[1218]: time="2025-07-11T00:31:28.693699460Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:31:28.693758 env[1218]: time="2025-07-11T00:31:28.693729140Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.693758 env[1218]: time="2025-07-11T00:31:28.693745820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.693798 env[1218]: time="2025-07-11T00:31:28.693759900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.693798 env[1218]: time="2025-07-11T00:31:28.693780060Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.694125 env[1218]: time="2025-07-11T00:31:28.694094380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 11 00:31:28.694161 env[1218]: time="2025-07-11T00:31:28.694135100Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.694161 env[1218]: time="2025-07-11T00:31:28.694151460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.694199 env[1218]: time="2025-07-11T00:31:28.694165100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.694199 env[1218]: time="2025-07-11T00:31:28.694177780Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:31:28.694329 env[1218]: time="2025-07-11T00:31:28.694306420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:31:28.694407 env[1218]: time="2025-07-11T00:31:28.694390660Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:31:28.694639 env[1218]: time="2025-07-11T00:31:28.694616740Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:31:28.694669 env[1218]: time="2025-07-11T00:31:28.694651740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694690 env[1218]: time="2025-07-11T00:31:28.694666700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:31:28.694782 env[1218]: time="2025-07-11T00:31:28.694766460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694817 env[1218]: time="2025-07-11T00:31:28.694783740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694817 env[1218]: time="2025-07-11T00:31:28.694796780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694817 env[1218]: time="2025-07-11T00:31:28.694810180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694873 env[1218]: time="2025-07-11T00:31:28.694821860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694873 env[1218]: time="2025-07-11T00:31:28.694834460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694873 env[1218]: time="2025-07-11T00:31:28.694849700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694873 env[1218]: time="2025-07-11T00:31:28.694861260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.694953 env[1218]: time="2025-07-11T00:31:28.694873500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:31:28.695013 env[1218]: time="2025-07-11T00:31:28.694993980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.695044 env[1218]: time="2025-07-11T00:31:28.695014620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 11 00:31:28.695044 env[1218]: time="2025-07-11T00:31:28.695027540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.695044 env[1218]: time="2025-07-11T00:31:28.695038940Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:31:28.695104 env[1218]: time="2025-07-11T00:31:28.695052540Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 11 00:31:28.695104 env[1218]: time="2025-07-11T00:31:28.695063260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:31:28.695104 env[1218]: time="2025-07-11T00:31:28.695079220Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 11 00:31:28.695184 env[1218]: time="2025-07-11T00:31:28.695127620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 11 00:31:28.695363 env[1218]: time="2025-07-11T00:31:28.695309380Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:31:28.696049 env[1218]: time="2025-07-11T00:31:28.695369380Z" level=info msg="Connect containerd service" Jul 11 00:31:28.696049 env[1218]: time="2025-07-11T00:31:28.695400860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 
11 00:31:28.696191 env[1218]: time="2025-07-11T00:31:28.696161700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:31:28.696531 env[1218]: time="2025-07-11T00:31:28.696506380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:31:28.696569 env[1218]: time="2025-07-11T00:31:28.696557180Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:31:28.696613 env[1218]: time="2025-07-11T00:31:28.696599060Z" level=info msg="containerd successfully booted in 0.039203s" Jul 11 00:31:28.696687 systemd[1]: Started containerd.service. Jul 11 00:31:28.697774 env[1218]: time="2025-07-11T00:31:28.697728060Z" level=info msg="Start subscribing containerd event" Jul 11 00:31:28.697805 env[1218]: time="2025-07-11T00:31:28.697795780Z" level=info msg="Start recovering state" Jul 11 00:31:28.697873 env[1218]: time="2025-07-11T00:31:28.697858660Z" level=info msg="Start event monitor" Jul 11 00:31:28.697909 env[1218]: time="2025-07-11T00:31:28.697886940Z" level=info msg="Start snapshots syncer" Jul 11 00:31:28.697909 env[1218]: time="2025-07-11T00:31:28.697897220Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:31:28.697909 env[1218]: time="2025-07-11T00:31:28.697904260Z" level=info msg="Start streaming server" Jul 11 00:31:28.974316 systemd-networkd[1046]: eth0: Gained IPv6LL Jul 11 00:31:28.975977 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 11 00:31:28.977106 systemd[1]: Reached target network-online.target. Jul 11 00:31:28.979207 systemd[1]: Starting kubelet.service... Jul 11 00:31:29.476489 sshd_keygen[1216]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:31:29.494746 systemd[1]: Finished sshd-keygen.service. Jul 11 00:31:29.497053 systemd[1]: Starting issuegen.service... Jul 11 00:31:29.501937 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:31:29.502098 systemd[1]: Finished issuegen.service. Jul 11 00:31:29.504171 systemd[1]: Starting systemd-user-sessions.service... Jul 11 00:31:29.510981 systemd[1]: Finished systemd-user-sessions.service. Jul 11 00:31:29.513200 systemd[1]: Started getty@tty1.service. Jul 11 00:31:29.515578 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 11 00:31:29.517013 systemd[1]: Reached target getty.target. Jul 11 00:31:29.580801 systemd[1]: Started kubelet.service. Jul 11 00:31:29.582171 systemd[1]: Reached target multi-user.target. Jul 11 00:31:29.584351 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 11 00:31:29.591232 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 11 00:31:29.591400 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 11 00:31:29.592287 systemd[1]: Startup finished in 594ms (kernel) + 4.204s (initrd) + 4.362s (userspace) = 9.162s. 
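The long "Start cri plugin with config {...}" dump earlier in this boot (just before containerd reports it has booted) is containerd printing its effective CRI settings, including Options:map[SystemdCgroup:true] for the runc runtime and SandboxImage:registry.k8s.io/pause:3.6. A small sketch for checking the same settings from /etc/containerd/config.toml follows; it needs Python 3.11+ for the stdlib tomllib module, and keys missing from the file simply mean containerd's built-in defaults apply.

#!/usr/bin/env python3
# Read the CRI-related settings out of containerd's config file and show
# the ones the runtime dump above reports at startup.
import tomllib

with open("/etc/containerd/config.toml", "rb") as f:
    config = tomllib.load(f)

cri = config.get("plugins", {}).get("io.containerd.grpc.v1.cri", {})
runc = cri.get("containerd", {}).get("runtimes", {}).get("runc", {})
print("sandbox_image:", cri.get("sandbox_image", "<containerd default>"))
print("runtime_type: ", runc.get("runtime_type", "<containerd default>"))
print("SystemdCgroup:", runc.get("options", {}).get("SystemdCgroup", "<containerd default>"))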
Jul 11 00:31:30.078600 kubelet[1273]: E0711 00:31:30.078539 1273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:31:30.080707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:31:30.080822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:31:33.332907 systemd[1]: Created slice system-sshd.slice. Jul 11 00:31:33.334160 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:37984.service. Jul 11 00:31:33.377077 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 37984 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:31:33.381360 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:31:33.392312 systemd-logind[1207]: New session 1 of user core. Jul 11 00:31:33.393245 systemd[1]: Created slice user-500.slice. Jul 11 00:31:33.394383 systemd[1]: Starting user-runtime-dir@500.service... Jul 11 00:31:33.402929 systemd[1]: Finished user-runtime-dir@500.service. Jul 11 00:31:33.404427 systemd[1]: Starting user@500.service... Jul 11 00:31:33.407481 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:31:33.477649 systemd[1285]: Queued start job for default target default.target. Jul 11 00:31:33.478154 systemd[1285]: Reached target paths.target. Jul 11 00:31:33.478188 systemd[1285]: Reached target sockets.target. Jul 11 00:31:33.478200 systemd[1285]: Reached target timers.target. Jul 11 00:31:33.478210 systemd[1285]: Reached target basic.target. Jul 11 00:31:33.478248 systemd[1285]: Reached target default.target. Jul 11 00:31:33.478273 systemd[1285]: Startup finished in 64ms. Jul 11 00:31:33.478451 systemd[1]: Started user@500.service. Jul 11 00:31:33.480333 systemd[1]: Started session-1.scope. Jul 11 00:31:33.532629 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:38000.service. Jul 11 00:31:33.568399 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 38000 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:31:33.569637 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:31:33.574328 systemd[1]: Started session-2.scope. Jul 11 00:31:33.574798 systemd-logind[1207]: New session 2 of user core. Jul 11 00:31:33.630240 sshd[1294]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:33.633987 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:38008.service. Jul 11 00:31:33.634546 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:38000.service: Deactivated successfully. Jul 11 00:31:33.635332 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:31:33.635400 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:31:33.636067 systemd-logind[1207]: Removed session 2. Jul 11 00:31:33.667706 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 38008 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:31:33.669280 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:31:33.672540 systemd-logind[1207]: New session 3 of user core. Jul 11 00:31:33.673394 systemd[1]: Started session-3.scope. 
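The kubelet failure at the top of this block is an ordering issue, not a broken install: the service starts before /var/lib/kubelet/config.yaml exists, exits, and is started again once provisioning has written the file (it comes up cleanly later in this log). For reference, that file is a KubeletConfiguration document; the sketch below only illustrates its general shape with assumed field values and is not a reconstruction of what this node eventually received.

#!/usr/bin/env python3
# Write an illustrative, minimal KubeletConfiguration to the path the
# kubelet complained about. Field values are assumptions for the sketch.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # matches SystemdCgroup:true in containerd's runc options
staticPodPath: /etc/kubernetes/manifests
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_KUBELET_CONFIG)
print(f"wrote {path}")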
Jul 11 00:31:33.722279 sshd[1299]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:33.724902 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:38008.service: Deactivated successfully. Jul 11 00:31:33.725502 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:31:33.726024 systemd-logind[1207]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:31:33.727048 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:38014.service. Jul 11 00:31:33.727717 systemd-logind[1207]: Removed session 3. Jul 11 00:31:33.760621 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 38014 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:31:33.761799 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:31:33.765808 systemd[1]: Started session-4.scope. Jul 11 00:31:33.766372 systemd-logind[1207]: New session 4 of user core. Jul 11 00:31:33.820378 sshd[1306]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:33.822994 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:38014.service: Deactivated successfully. Jul 11 00:31:33.823748 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:31:33.824294 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:31:33.825333 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:38028.service. Jul 11 00:31:33.825999 systemd-logind[1207]: Removed session 4. Jul 11 00:31:33.858966 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 38028 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:31:33.860193 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:31:33.863853 systemd-logind[1207]: New session 5 of user core. Jul 11 00:31:33.864333 systemd[1]: Started session-5.scope. Jul 11 00:31:33.923724 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:31:33.923951 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:31:33.935399 systemd[1]: Starting coreos-metadata.service... Jul 11 00:31:33.941859 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:31:33.942031 systemd[1]: Finished coreos-metadata.service. Jul 11 00:31:34.507678 systemd[1]: Stopped kubelet.service. Jul 11 00:31:34.512769 systemd[1]: Starting kubelet.service... Jul 11 00:31:34.535201 systemd[1]: Reloading. Jul 11 00:31:34.586565 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2025-07-11T00:31:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:31:34.586897 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2025-07-11T00:31:34Z" level=info msg="torcx already run" Jul 11 00:31:34.734868 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:31:34.735016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:31:34.750610 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 11 00:31:34.816892 systemd[1]: Started kubelet.service. Jul 11 00:31:34.822671 systemd[1]: Stopping kubelet.service... Jul 11 00:31:34.824301 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:31:34.824602 systemd[1]: Stopped kubelet.service. Jul 11 00:31:34.826397 systemd[1]: Starting kubelet.service... Jul 11 00:31:34.935836 systemd[1]: Started kubelet.service. Jul 11 00:31:34.968707 kubelet[1427]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:31:34.968707 kubelet[1427]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:31:34.968707 kubelet[1427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:31:34.969032 kubelet[1427]: I0711 00:31:34.968743 1427 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:31:36.074132 kubelet[1427]: I0711 00:31:36.072617 1427 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:31:36.074132 kubelet[1427]: I0711 00:31:36.072650 1427 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:31:36.074132 kubelet[1427]: I0711 00:31:36.072866 1427 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:31:36.148520 kubelet[1427]: I0711 00:31:36.148474 1427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:31:36.164781 kubelet[1427]: E0711 00:31:36.164732 1427 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:31:36.164781 kubelet[1427]: I0711 00:31:36.164779 1427 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:31:36.167350 kubelet[1427]: I0711 00:31:36.167329 1427 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:31:36.168430 kubelet[1427]: I0711 00:31:36.168382 1427 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:31:36.168587 kubelet[1427]: I0711 00:31:36.168425 1427 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:31:36.168673 kubelet[1427]: I0711 00:31:36.168649 1427 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:31:36.168673 kubelet[1427]: I0711 00:31:36.168659 1427 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:31:36.168844 kubelet[1427]: I0711 00:31:36.168830 1427 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:31:36.173402 kubelet[1427]: I0711 00:31:36.173377 1427 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:31:36.173402 kubelet[1427]: I0711 00:31:36.173405 1427 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:31:36.174146 kubelet[1427]: I0711 00:31:36.174125 1427 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:31:36.175196 kubelet[1427]: I0711 00:31:36.175177 1427 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:31:36.175273 kubelet[1427]: E0711 00:31:36.175189 1427 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:36.175379 kubelet[1427]: E0711 00:31:36.175350 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:36.176183 kubelet[1427]: I0711 00:31:36.176161 1427 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 11 00:31:36.176903 kubelet[1427]: I0711 00:31:36.176885 1427 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:31:36.177011 kubelet[1427]: W0711 
00:31:36.177000 1427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:31:36.179315 kubelet[1427]: I0711 00:31:36.179293 1427 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:31:36.179376 kubelet[1427]: I0711 00:31:36.179362 1427 server.go:1289] "Started kubelet" Jul 11 00:31:36.180515 kubelet[1427]: I0711 00:31:36.180405 1427 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:31:36.180947 kubelet[1427]: I0711 00:31:36.180929 1427 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:31:36.181097 kubelet[1427]: I0711 00:31:36.181069 1427 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:31:36.181882 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 11 00:31:36.182233 kubelet[1427]: I0711 00:31:36.182220 1427 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:31:36.185317 kubelet[1427]: I0711 00:31:36.185278 1427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:31:36.188976 kubelet[1427]: E0711 00:31:36.188942 1427 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:31:36.189251 kubelet[1427]: I0711 00:31:36.189202 1427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:31:36.189558 kubelet[1427]: E0711 00:31:36.189532 1427 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.78\" not found" Jul 11 00:31:36.189623 kubelet[1427]: I0711 00:31:36.189564 1427 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:31:36.189726 kubelet[1427]: I0711 00:31:36.189708 1427 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:31:36.189786 kubelet[1427]: I0711 00:31:36.189774 1427 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:31:36.190187 kubelet[1427]: I0711 00:31:36.190163 1427 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:31:36.193633 kubelet[1427]: E0711 00:31:36.193392 1427 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.78\" not found" node="10.0.0.78" Jul 11 00:31:36.194655 kubelet[1427]: I0711 00:31:36.194299 1427 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:31:36.194655 kubelet[1427]: I0711 00:31:36.194327 1427 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:31:36.205484 kubelet[1427]: I0711 00:31:36.205453 1427 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:31:36.205602 kubelet[1427]: I0711 00:31:36.205587 1427 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:31:36.205681 kubelet[1427]: I0711 00:31:36.205671 1427 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:31:36.290396 kubelet[1427]: E0711 00:31:36.290356 1427 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.78\" not found" Jul 11 00:31:36.304608 kubelet[1427]: I0711 00:31:36.304586 1427 
policy_none.go:49] "None policy: Start" Jul 11 00:31:36.304734 kubelet[1427]: I0711 00:31:36.304722 1427 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:31:36.304803 kubelet[1427]: I0711 00:31:36.304794 1427 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:31:36.309552 systemd[1]: Created slice kubepods.slice. Jul 11 00:31:36.313601 systemd[1]: Created slice kubepods-burstable.slice. Jul 11 00:31:36.315969 systemd[1]: Created slice kubepods-besteffort.slice. Jul 11 00:31:36.329770 kubelet[1427]: E0711 00:31:36.328839 1427 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:31:36.329770 kubelet[1427]: I0711 00:31:36.329094 1427 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:31:36.329770 kubelet[1427]: I0711 00:31:36.329201 1427 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:31:36.329770 kubelet[1427]: I0711 00:31:36.329433 1427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:31:36.331062 kubelet[1427]: E0711 00:31:36.330961 1427 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:31:36.331062 kubelet[1427]: E0711 00:31:36.331001 1427 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.78\" not found" Jul 11 00:31:36.381779 kubelet[1427]: I0711 00:31:36.381729 1427 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:31:36.382900 kubelet[1427]: I0711 00:31:36.382861 1427 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 11 00:31:36.382900 kubelet[1427]: I0711 00:31:36.382881 1427 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:31:36.382900 kubelet[1427]: I0711 00:31:36.382902 1427 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:31:36.383007 kubelet[1427]: I0711 00:31:36.382910 1427 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:31:36.383007 kubelet[1427]: E0711 00:31:36.382951 1427 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 11 00:31:36.430881 kubelet[1427]: I0711 00:31:36.430849 1427 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.78" Jul 11 00:31:36.435384 kubelet[1427]: I0711 00:31:36.435337 1427 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.78" Jul 11 00:31:36.469155 sudo[1315]: pam_unix(sudo:session): session closed for user root Jul 11 00:31:36.470953 sshd[1312]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:36.473397 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:38028.service: Deactivated successfully. Jul 11 00:31:36.474082 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:31:36.474634 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:31:36.475364 systemd-logind[1207]: Removed session 5. 
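Shortly after the plugin manager starts, the kubelet registers itself ("Successfully registered node" for 10.0.0.78). A quick way to confirm the registration from the API side is sketched below; it assumes the kubernetes Python client package is installed and a kubeconfig with read access to nodes is available, neither of which is part of this image's boot flow.

#!/usr/bin/env python3
# Read the node object back from the API server and print its conditions,
# confirming the "Successfully registered node" message from the kubelet.
from kubernetes import client, config

config.load_kube_config()
node = client.CoreV1Api().read_node(name="10.0.0.78")
for cond in node.status.conditions:
    print(f"{cond.type}: {cond.status}")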
Jul 11 00:31:36.541605 kubelet[1427]: I0711 00:31:36.541574 1427 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 11 00:31:36.541970 env[1218]: time="2025-07-11T00:31:36.541933980Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:31:36.542312 kubelet[1427]: I0711 00:31:36.542141 1427 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 11 00:31:37.075161 kubelet[1427]: I0711 00:31:37.075132 1427 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 11 00:31:37.075500 kubelet[1427]: I0711 00:31:37.075326 1427 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 11 00:31:37.075500 kubelet[1427]: I0711 00:31:37.075328 1427 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 11 00:31:37.075500 kubelet[1427]: I0711 00:31:37.075353 1427 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 11 00:31:37.176009 kubelet[1427]: E0711 00:31:37.175981 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:37.176133 kubelet[1427]: I0711 00:31:37.176042 1427 apiserver.go:52] "Watching apiserver" Jul 11 00:31:37.186514 systemd[1]: Created slice kubepods-burstable-pod9c845f69_df61_4215_98e1_604485b79b77.slice. 
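
The kuberuntime_manager and kubelet_network entries above record the node pod CIDR being set to 192.168.1.0/24 through the CRI. The following is a minimal Go sketch that simply validates and inspects such a prefix with the standard net/netip package; it is an editorial illustration only, and allocation of addresses within the range is done by the CNI plugin (Cilium here), not the kubelet.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// 192.168.1.0/24 is the pod CIDR reported in the kubelet entries above.
	prefix, err := netip.ParsePrefix("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println("network:", prefix.Masked().Addr())  // 192.168.1.0
	fmt.Println("prefix bits:", prefix.Bits())       // /24 -> 256 addresses
	fmt.Println("first host:", prefix.Addr().Next()) // 192.168.1.1
}
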
Jul 11 00:31:37.187362 kubelet[1427]: W0711 00:31:37.187331 1427 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c845f69_df61_4215_98e1_604485b79b77.slice/cpu.weight": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c845f69_df61_4215_98e1_604485b79b77.slice/cpu.weight: no such device Jul 11 00:31:37.191412 kubelet[1427]: I0711 00:31:37.191389 1427 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:31:37.194319 kubelet[1427]: I0711 00:31:37.194289 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-hostproc\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194385 kubelet[1427]: I0711 00:31:37.194328 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-lib-modules\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194385 kubelet[1427]: I0711 00:31:37.194355 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-xtables-lock\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194385 kubelet[1427]: I0711 00:31:37.194370 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c845f69-df61-4215-98e1-604485b79b77-clustermesh-secrets\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194481 kubelet[1427]: I0711 00:31:37.194411 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-net\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194481 kubelet[1427]: I0711 00:31:37.194458 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-hubble-tls\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194531 kubelet[1427]: I0711 00:31:37.194488 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cdx5\" (UniqueName: \"kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-kube-api-access-8cdx5\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194531 kubelet[1427]: I0711 00:31:37.194525 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-bpf-maps\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 
00:31:37.194572 kubelet[1427]: I0711 00:31:37.194541 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cni-path\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194572 kubelet[1427]: I0711 00:31:37.194556 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c845f69-df61-4215-98e1-604485b79b77-cilium-config-path\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194660 kubelet[1427]: I0711 00:31:37.194573 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-run\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194660 kubelet[1427]: I0711 00:31:37.194593 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-cgroup\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194660 kubelet[1427]: I0711 00:31:37.194610 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-kernel\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194660 kubelet[1427]: I0711 00:31:37.194624 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-etc-cni-netd\") pod \"cilium-nrlh5\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " pod="kube-system/cilium-nrlh5" Jul 11 00:31:37.194743 kubelet[1427]: I0711 00:31:37.194665 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a782c4cd-f87a-4328-b613-83a73ffafe5c-kube-proxy\") pod \"kube-proxy-9zdt4\" (UID: \"a782c4cd-f87a-4328-b613-83a73ffafe5c\") " pod="kube-system/kube-proxy-9zdt4" Jul 11 00:31:37.194743 kubelet[1427]: I0711 00:31:37.194682 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a782c4cd-f87a-4328-b613-83a73ffafe5c-xtables-lock\") pod \"kube-proxy-9zdt4\" (UID: \"a782c4cd-f87a-4328-b613-83a73ffafe5c\") " pod="kube-system/kube-proxy-9zdt4" Jul 11 00:31:37.194743 kubelet[1427]: I0711 00:31:37.194697 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a782c4cd-f87a-4328-b613-83a73ffafe5c-lib-modules\") pod \"kube-proxy-9zdt4\" (UID: \"a782c4cd-f87a-4328-b613-83a73ffafe5c\") " pod="kube-system/kube-proxy-9zdt4" Jul 11 00:31:37.194743 kubelet[1427]: I0711 00:31:37.194723 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8k7z\" 
(UniqueName: \"kubernetes.io/projected/a782c4cd-f87a-4328-b613-83a73ffafe5c-kube-api-access-z8k7z\") pod \"kube-proxy-9zdt4\" (UID: \"a782c4cd-f87a-4328-b613-83a73ffafe5c\") " pod="kube-system/kube-proxy-9zdt4" Jul 11 00:31:37.206730 systemd[1]: Created slice kubepods-besteffort-poda782c4cd_f87a_4328_b613_83a73ffafe5c.slice. Jul 11 00:31:37.295707 kubelet[1427]: I0711 00:31:37.295652 1427 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 11 00:31:37.505288 kubelet[1427]: E0711 00:31:37.505179 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:37.507540 env[1218]: time="2025-07-11T00:31:37.507204660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrlh5,Uid:9c845f69-df61-4215-98e1-604485b79b77,Namespace:kube-system,Attempt:0,}" Jul 11 00:31:37.519007 kubelet[1427]: E0711 00:31:37.518969 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:37.519434 env[1218]: time="2025-07-11T00:31:37.519392500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zdt4,Uid:a782c4cd-f87a-4328-b613-83a73ffafe5c,Namespace:kube-system,Attempt:0,}" Jul 11 00:31:38.017580 env[1218]: time="2025-07-11T00:31:38.017516060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.018740 env[1218]: time="2025-07-11T00:31:38.018702620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.020957 env[1218]: time="2025-07-11T00:31:38.020915260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.021998 env[1218]: time="2025-07-11T00:31:38.021966220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.024410 env[1218]: time="2025-07-11T00:31:38.024382460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.026649 env[1218]: time="2025-07-11T00:31:38.026619860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.028384 env[1218]: time="2025-07-11T00:31:38.028351260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.029327 env[1218]: time="2025-07-11T00:31:38.029300220Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:38.057481 env[1218]: time="2025-07-11T00:31:38.057391660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:31:38.057481 env[1218]: time="2025-07-11T00:31:38.057443780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:31:38.057481 env[1218]: time="2025-07-11T00:31:38.057463460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:31:38.057838 env[1218]: time="2025-07-11T00:31:38.057795460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b11b14656dd16699074589c5d0d326b5e20f441d0bd48d8432f63ec2fb7c12 pid=1497 runtime=io.containerd.runc.v2 Jul 11 00:31:38.057923 env[1218]: time="2025-07-11T00:31:38.057885580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:31:38.057971 env[1218]: time="2025-07-11T00:31:38.057937900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:31:38.058003 env[1218]: time="2025-07-11T00:31:38.057982620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:31:38.058232 env[1218]: time="2025-07-11T00:31:38.058187020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39 pid=1498 runtime=io.containerd.runc.v2 Jul 11 00:31:38.076805 systemd[1]: Started cri-containerd-d9b11b14656dd16699074589c5d0d326b5e20f441d0bd48d8432f63ec2fb7c12.scope. Jul 11 00:31:38.080930 systemd[1]: Started cri-containerd-0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39.scope. 
Jul 11 00:31:38.121052 env[1218]: time="2025-07-11T00:31:38.121010500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrlh5,Uid:9c845f69-df61-4215-98e1-604485b79b77,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\"" Jul 11 00:31:38.122657 env[1218]: time="2025-07-11T00:31:38.122619020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zdt4,Uid:a782c4cd-f87a-4328-b613-83a73ffafe5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b11b14656dd16699074589c5d0d326b5e20f441d0bd48d8432f63ec2fb7c12\"" Jul 11 00:31:38.123080 kubelet[1427]: E0711 00:31:38.123055 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:38.123337 kubelet[1427]: E0711 00:31:38.123135 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:38.124225 env[1218]: time="2025-07-11T00:31:38.124193660Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:31:38.176633 kubelet[1427]: E0711 00:31:38.176600 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:38.301506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288153809.mount: Deactivated successfully. Jul 11 00:31:39.177161 kubelet[1427]: E0711 00:31:39.177094 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:40.177698 kubelet[1427]: E0711 00:31:40.177655 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:41.178802 kubelet[1427]: E0711 00:31:41.178743 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:42.179678 kubelet[1427]: E0711 00:31:42.179617 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:42.833649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611844721.mount: Deactivated successfully. 
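
The RunPodSandbox "returns sandbox id", PullImage, CreateContainer and StartContainer entries around this point are the kubelet driving containerd over the CRI. Below is a rough Go sketch of that call sequence, assuming the generated k8s.io/cri-api v1 client and containerd's default socket path; the PodSandboxConfig and ContainerConfig the kubelet actually sends are far richer, and error handling is omitted.

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> "returns sandbox id ..." in the log above.
	sandbox, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name: "cilium-nrlh5", Uid: "9c845f69-df61-4215-98e1-604485b79b77",
				Namespace: "kube-system", Attempt: 0,
			},
		},
	})

	// 2. CreateContainer inside that sandbox (e.g. the mount-cgroup init container).
	created, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
		},
	})

	// 3. StartContainer -> "StartContainer for ... returns successfully".
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
}
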
Jul 11 00:31:43.180394 kubelet[1427]: E0711 00:31:43.180239 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:44.180892 kubelet[1427]: E0711 00:31:44.180858 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:45.016494 env[1218]: time="2025-07-11T00:31:45.016438380Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:45.017572 env[1218]: time="2025-07-11T00:31:45.017547540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:45.018911 env[1218]: time="2025-07-11T00:31:45.018884060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:45.019582 env[1218]: time="2025-07-11T00:31:45.019553140Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 11 00:31:45.021442 env[1218]: time="2025-07-11T00:31:45.021414540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 11 00:31:45.023999 env[1218]: time="2025-07-11T00:31:45.023959180Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:31:45.034247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914296732.mount: Deactivated successfully. Jul 11 00:31:45.037207 env[1218]: time="2025-07-11T00:31:45.037155740Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\"" Jul 11 00:31:45.038081 env[1218]: time="2025-07-11T00:31:45.038030500Z" level=info msg="StartContainer for \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\"" Jul 11 00:31:45.057022 systemd[1]: Started cri-containerd-f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0.scope. Jul 11 00:31:45.099080 env[1218]: time="2025-07-11T00:31:45.098437900Z" level=info msg="StartContainer for \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\" returns successfully" Jul 11 00:31:45.126001 systemd[1]: cri-containerd-f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0.scope: Deactivated successfully. 
Jul 11 00:31:45.180998 kubelet[1427]: E0711 00:31:45.180948 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:45.242741 env[1218]: time="2025-07-11T00:31:45.242697500Z" level=info msg="shim disconnected" id=f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0 Jul 11 00:31:45.242925 env[1218]: time="2025-07-11T00:31:45.242907740Z" level=warning msg="cleaning up after shim disconnected" id=f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0 namespace=k8s.io Jul 11 00:31:45.242988 env[1218]: time="2025-07-11T00:31:45.242975140Z" level=info msg="cleaning up dead shim" Jul 11 00:31:45.249847 env[1218]: time="2025-07-11T00:31:45.249806780Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:31:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1614 runtime=io.containerd.runc.v2\n" Jul 11 00:31:45.398180 kubelet[1427]: E0711 00:31:45.398062 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:45.401438 env[1218]: time="2025-07-11T00:31:45.401380580Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:31:45.411275 env[1218]: time="2025-07-11T00:31:45.411227140Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\"" Jul 11 00:31:45.411762 env[1218]: time="2025-07-11T00:31:45.411727220Z" level=info msg="StartContainer for \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\"" Jul 11 00:31:45.426706 systemd[1]: Started cri-containerd-0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08.scope. Jul 11 00:31:45.457722 env[1218]: time="2025-07-11T00:31:45.457674420Z" level=info msg="StartContainer for \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\" returns successfully" Jul 11 00:31:45.470421 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:31:45.470622 systemd[1]: Stopped systemd-sysctl.service. Jul 11 00:31:45.470822 systemd[1]: Stopping systemd-sysctl.service... Jul 11 00:31:45.472660 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:31:45.475775 systemd[1]: cri-containerd-0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08.scope: Deactivated successfully. Jul 11 00:31:45.479939 systemd[1]: Finished systemd-sysctl.service. 
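
The recurring dns.go:153 "Nameserver limits exceeded" warnings reflect the kubelet's cap of three nameservers per pod resolv.conf; the host resolver list is trimmed here to 1.1.1.1, 1.0.0.1 and 8.8.8.8. The sketch below reproduces only that truncation and is not the kubelet's actual resolv.conf handling; applyLimit is an illustrative helper.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers is the per-pod limit the kubelet enforces; extra entries are
// dropped and the "Nameserver limits exceeded" event above is emitted.
const maxNameservers = 3

// applyLimit extracts nameserver lines from resolv.conf-style input and keeps
// only the first three.
func applyLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(applyLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
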
Jul 11 00:31:45.500254 env[1218]: time="2025-07-11T00:31:45.500208380Z" level=info msg="shim disconnected" id=0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08 Jul 11 00:31:45.500473 env[1218]: time="2025-07-11T00:31:45.500453540Z" level=warning msg="cleaning up after shim disconnected" id=0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08 namespace=k8s.io Jul 11 00:31:45.500532 env[1218]: time="2025-07-11T00:31:45.500520140Z" level=info msg="cleaning up dead shim" Jul 11 00:31:45.512367 env[1218]: time="2025-07-11T00:31:45.512331660Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:31:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1680 runtime=io.containerd.runc.v2\n" Jul 11 00:31:46.031917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0-rootfs.mount: Deactivated successfully. Jul 11 00:31:46.181207 kubelet[1427]: E0711 00:31:46.181169 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:46.233536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291156291.mount: Deactivated successfully. Jul 11 00:31:46.406444 kubelet[1427]: E0711 00:31:46.405543 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:46.410219 env[1218]: time="2025-07-11T00:31:46.410177820Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:31:46.425517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463101756.mount: Deactivated successfully. Jul 11 00:31:46.432126 env[1218]: time="2025-07-11T00:31:46.432075820Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\"" Jul 11 00:31:46.432864 env[1218]: time="2025-07-11T00:31:46.432834140Z" level=info msg="StartContainer for \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\"" Jul 11 00:31:46.447794 systemd[1]: Started cri-containerd-ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b.scope. Jul 11 00:31:46.496039 env[1218]: time="2025-07-11T00:31:46.495987540Z" level=info msg="StartContainer for \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\" returns successfully" Jul 11 00:31:46.496361 systemd[1]: cri-containerd-ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b.scope: Deactivated successfully. 
Jul 11 00:31:46.607769 env[1218]: time="2025-07-11T00:31:46.607716460Z" level=info msg="shim disconnected" id=ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b Jul 11 00:31:46.607769 env[1218]: time="2025-07-11T00:31:46.607757780Z" level=warning msg="cleaning up after shim disconnected" id=ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b namespace=k8s.io Jul 11 00:31:46.607769 env[1218]: time="2025-07-11T00:31:46.607766820Z" level=info msg="cleaning up dead shim" Jul 11 00:31:46.614768 env[1218]: time="2025-07-11T00:31:46.614734860Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:31:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1735 runtime=io.containerd.runc.v2\n" Jul 11 00:31:46.751763 env[1218]: time="2025-07-11T00:31:46.751185460Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:46.752680 env[1218]: time="2025-07-11T00:31:46.752648300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:46.754186 env[1218]: time="2025-07-11T00:31:46.754160660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:46.755588 env[1218]: time="2025-07-11T00:31:46.755566980Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:46.755883 env[1218]: time="2025-07-11T00:31:46.755853580Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 11 00:31:46.759683 env[1218]: time="2025-07-11T00:31:46.759650620Z" level=info msg="CreateContainer within sandbox \"d9b11b14656dd16699074589c5d0d326b5e20f441d0bd48d8432f63ec2fb7c12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:31:46.771574 env[1218]: time="2025-07-11T00:31:46.771532020Z" level=info msg="CreateContainer within sandbox \"d9b11b14656dd16699074589c5d0d326b5e20f441d0bd48d8432f63ec2fb7c12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b24ddde8193bdd16e947a59e481feb3900d720bd00eea0f6fbf04793a1c9e9a\"" Jul 11 00:31:46.772063 env[1218]: time="2025-07-11T00:31:46.771980580Z" level=info msg="StartContainer for \"5b24ddde8193bdd16e947a59e481feb3900d720bd00eea0f6fbf04793a1c9e9a\"" Jul 11 00:31:46.785825 systemd[1]: Started cri-containerd-5b24ddde8193bdd16e947a59e481feb3900d720bd00eea0f6fbf04793a1c9e9a.scope. Jul 11 00:31:46.823045 env[1218]: time="2025-07-11T00:31:46.822997740Z" level=info msg="StartContainer for \"5b24ddde8193bdd16e947a59e481feb3900d720bd00eea0f6fbf04793a1c9e9a\" returns successfully" Jul 11 00:31:47.031896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503011589.mount: Deactivated successfully. 
Jul 11 00:31:47.181633 kubelet[1427]: E0711 00:31:47.181587 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:47.404206 kubelet[1427]: E0711 00:31:47.404085 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:47.406350 kubelet[1427]: E0711 00:31:47.406309 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:47.409976 env[1218]: time="2025-07-11T00:31:47.409930380Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:31:47.419321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051775813.mount: Deactivated successfully. Jul 11 00:31:47.424316 env[1218]: time="2025-07-11T00:31:47.424270540Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\"" Jul 11 00:31:47.424821 env[1218]: time="2025-07-11T00:31:47.424782100Z" level=info msg="StartContainer for \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\"" Jul 11 00:31:47.429830 kubelet[1427]: I0711 00:31:47.429760 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9zdt4" podStartSLOduration=2.7970267 podStartE2EDuration="11.42974382s" podCreationTimestamp="2025-07-11 00:31:36 +0000 UTC" firstStartedPulling="2025-07-11 00:31:38.12393438 +0000 UTC m=+3.184242081" lastFinishedPulling="2025-07-11 00:31:46.7566515 +0000 UTC m=+11.816959201" observedRunningTime="2025-07-11 00:31:47.41414586 +0000 UTC m=+12.474453561" watchObservedRunningTime="2025-07-11 00:31:47.42974382 +0000 UTC m=+12.490051521" Jul 11 00:31:47.441558 systemd[1]: Started cri-containerd-0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c.scope. Jul 11 00:31:47.472824 systemd[1]: cri-containerd-0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c.scope: Deactivated successfully. 
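
In the pod_startup_latency_tracker entry for kube-proxy-9zdt4 above, podStartSLOduration excludes image pull time: the gap between firstStartedPulling (m=+3.184242081) and lastFinishedPulling (m=+11.816959201) accounts for the difference from podStartE2EDuration. A short Go check of that arithmetic, treating the monotonic m=+ offsets as plain durations (editorial illustration only):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the kube-proxy-9zdt4 pod_startup_latency_tracker entry.
	e2e, _ := time.ParseDuration("11.42974382s")       // podStartE2EDuration
	firstPull, _ := time.ParseDuration("3.184242081s") // firstStartedPulling m=+ offset
	lastPull, _ := time.ParseDuration("11.816959201s") // lastFinishedPulling m=+ offset

	pull := lastPull - firstPull
	fmt.Println("image pull time:", pull)     // 8.63271712s
	fmt.Println("E2E minus pull :", e2e-pull) // 2.7970267s == podStartSLOduration
}
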
Jul 11 00:31:47.473992 env[1218]: time="2025-07-11T00:31:47.473314380Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c845f69_df61_4215_98e1_604485b79b77.slice/cri-containerd-0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c.scope/memory.events\": no such file or directory" Jul 11 00:31:47.475531 env[1218]: time="2025-07-11T00:31:47.475497580Z" level=info msg="StartContainer for \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\" returns successfully" Jul 11 00:31:47.519950 env[1218]: time="2025-07-11T00:31:47.519895620Z" level=info msg="shim disconnected" id=0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c Jul 11 00:31:47.519950 env[1218]: time="2025-07-11T00:31:47.519946260Z" level=warning msg="cleaning up after shim disconnected" id=0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c namespace=k8s.io Jul 11 00:31:47.519950 env[1218]: time="2025-07-11T00:31:47.519956300Z" level=info msg="cleaning up dead shim" Jul 11 00:31:47.527282 env[1218]: time="2025-07-11T00:31:47.527249540Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:31:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1963 runtime=io.containerd.runc.v2\n" Jul 11 00:31:48.031293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c-rootfs.mount: Deactivated successfully. Jul 11 00:31:48.182455 kubelet[1427]: E0711 00:31:48.182409 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:48.409839 kubelet[1427]: E0711 00:31:48.409730 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:48.413248 kubelet[1427]: E0711 00:31:48.410577 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:48.418148 env[1218]: time="2025-07-11T00:31:48.418085740Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:31:48.429830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671463255.mount: Deactivated successfully. Jul 11 00:31:48.433895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064347896.mount: Deactivated successfully. Jul 11 00:31:48.437975 env[1218]: time="2025-07-11T00:31:48.437932740Z" level=info msg="CreateContainer within sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\"" Jul 11 00:31:48.438823 env[1218]: time="2025-07-11T00:31:48.438795220Z" level=info msg="StartContainer for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\"" Jul 11 00:31:48.459307 systemd[1]: Started cri-containerd-93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43.scope. 
Jul 11 00:31:48.503653 env[1218]: time="2025-07-11T00:31:48.503597460Z" level=info msg="StartContainer for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" returns successfully" Jul 11 00:31:48.646459 kubelet[1427]: I0711 00:31:48.646420 1427 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:31:48.770150 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 11 00:31:49.014143 kernel: Initializing XFRM netlink socket Jul 11 00:31:49.017141 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 11 00:31:49.183525 kubelet[1427]: E0711 00:31:49.183477 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:49.413609 kubelet[1427]: E0711 00:31:49.413499 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:50.184405 kubelet[1427]: E0711 00:31:50.184351 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:50.415751 kubelet[1427]: E0711 00:31:50.415631 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:50.645608 systemd-networkd[1046]: cilium_host: Link UP Jul 11 00:31:50.646376 systemd-networkd[1046]: cilium_net: Link UP Jul 11 00:31:50.646389 systemd-networkd[1046]: cilium_net: Gained carrier Jul 11 00:31:50.646542 systemd-networkd[1046]: cilium_host: Gained carrier Jul 11 00:31:50.648152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 11 00:31:50.650503 systemd-networkd[1046]: cilium_host: Gained IPv6LL Jul 11 00:31:50.742148 systemd-networkd[1046]: cilium_vxlan: Link UP Jul 11 00:31:50.742156 systemd-networkd[1046]: cilium_vxlan: Gained carrier Jul 11 00:31:50.967251 systemd-networkd[1046]: cilium_net: Gained IPv6LL Jul 11 00:31:51.033137 kernel: NET: Registered PF_ALG protocol family Jul 11 00:31:51.184675 kubelet[1427]: E0711 00:31:51.184634 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:51.416949 kubelet[1427]: E0711 00:31:51.416809 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:51.613238 systemd-networkd[1046]: lxc_health: Link UP Jul 11 00:31:51.624836 systemd-networkd[1046]: lxc_health: Gained carrier Jul 11 00:31:51.625156 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 11 00:31:52.184920 kubelet[1427]: E0711 00:31:52.184869 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:52.206264 systemd-networkd[1046]: cilium_vxlan: Gained IPv6LL Jul 11 00:31:52.523979 kubelet[1427]: I0711 00:31:52.523791 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nrlh5" podStartSLOduration=9.626900580000001 podStartE2EDuration="16.52377346s" podCreationTimestamp="2025-07-11 00:31:36 +0000 UTC" firstStartedPulling="2025-07-11 00:31:38.12386146 +0000 UTC m=+3.184169161" lastFinishedPulling="2025-07-11 00:31:45.02073434 +0000 UTC 
m=+10.081042041" observedRunningTime="2025-07-11 00:31:49.43047854 +0000 UTC m=+14.490786241" watchObservedRunningTime="2025-07-11 00:31:52.52377346 +0000 UTC m=+17.584081121" Jul 11 00:31:52.533386 systemd[1]: Created slice kubepods-besteffort-pod568fc707_cf08_4ea7_a75b_fe6b81f1147b.slice. Jul 11 00:31:52.588583 kubelet[1427]: I0711 00:31:52.588511 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7p99\" (UniqueName: \"kubernetes.io/projected/568fc707-cf08-4ea7-a75b-fe6b81f1147b-kube-api-access-p7p99\") pod \"nginx-deployment-7fcdb87857-4vppp\" (UID: \"568fc707-cf08-4ea7-a75b-fe6b81f1147b\") " pod="default/nginx-deployment-7fcdb87857-4vppp" Jul 11 00:31:52.836811 env[1218]: time="2025-07-11T00:31:52.836682260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4vppp,Uid:568fc707-cf08-4ea7-a75b-fe6b81f1147b,Namespace:default,Attempt:0,}" Jul 11 00:31:52.887597 systemd-networkd[1046]: lxc96cf07394658: Link UP Jul 11 00:31:52.890153 kernel: eth0: renamed from tmp68cca Jul 11 00:31:52.897687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:31:52.897795 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc96cf07394658: link becomes ready Jul 11 00:31:52.897888 systemd-networkd[1046]: lxc96cf07394658: Gained carrier Jul 11 00:31:52.910261 systemd-networkd[1046]: lxc_health: Gained IPv6LL Jul 11 00:31:52.988408 kubelet[1427]: E0711 00:31:52.988366 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:53.185960 kubelet[1427]: E0711 00:31:53.185844 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:54.186663 kubelet[1427]: E0711 00:31:54.186602 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:54.574263 systemd-networkd[1046]: lxc96cf07394658: Gained IPv6LL Jul 11 00:31:55.187249 kubelet[1427]: E0711 00:31:55.187205 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:56.094979 env[1218]: time="2025-07-11T00:31:56.094913740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:31:56.095353 env[1218]: time="2025-07-11T00:31:56.094953300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:31:56.095353 env[1218]: time="2025-07-11T00:31:56.094963900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:31:56.095353 env[1218]: time="2025-07-11T00:31:56.095151780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68cca0b62e89bfd408f3942011861f02095d690d8a3981ff21885de99b4f5cb2 pid=2515 runtime=io.containerd.runc.v2 Jul 11 00:31:56.108559 systemd[1]: Started cri-containerd-68cca0b62e89bfd408f3942011861f02095d690d8a3981ff21885de99b4f5cb2.scope. 
Jul 11 00:31:56.163979 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:31:56.174337 kubelet[1427]: E0711 00:31:56.174280 1427 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:56.180391 env[1218]: time="2025-07-11T00:31:56.180351580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4vppp,Uid:568fc707-cf08-4ea7-a75b-fe6b81f1147b,Namespace:default,Attempt:0,} returns sandbox id \"68cca0b62e89bfd408f3942011861f02095d690d8a3981ff21885de99b4f5cb2\"" Jul 11 00:31:56.181643 env[1218]: time="2025-07-11T00:31:56.181610940Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 11 00:31:56.188269 kubelet[1427]: E0711 00:31:56.188237 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:57.188783 kubelet[1427]: E0711 00:31:57.188732 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:57.623786 kubelet[1427]: I0711 00:31:57.623491 1427 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:31:57.624362 kubelet[1427]: E0711 00:31:57.624169 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:58.059850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019505710.mount: Deactivated successfully. Jul 11 00:31:58.189001 kubelet[1427]: E0711 00:31:58.188953 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:58.429693 kubelet[1427]: E0711 00:31:58.429371 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:59.189257 kubelet[1427]: E0711 00:31:59.189186 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:31:59.299103 env[1218]: time="2025-07-11T00:31:59.299044540Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:59.300841 env[1218]: time="2025-07-11T00:31:59.300803780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:59.302539 env[1218]: time="2025-07-11T00:31:59.302511460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:59.304728 env[1218]: time="2025-07-11T00:31:59.304672100Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:31:59.306047 env[1218]: time="2025-07-11T00:31:59.305927260Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 11 00:31:59.310808 
env[1218]: time="2025-07-11T00:31:59.310759660Z" level=info msg="CreateContainer within sandbox \"68cca0b62e89bfd408f3942011861f02095d690d8a3981ff21885de99b4f5cb2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 11 00:31:59.320585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385108696.mount: Deactivated successfully. Jul 11 00:31:59.326236 env[1218]: time="2025-07-11T00:31:59.326108300Z" level=info msg="CreateContainer within sandbox \"68cca0b62e89bfd408f3942011861f02095d690d8a3981ff21885de99b4f5cb2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4f817449a5a7b9df2556499f47b05ce0a16d3beedfedfabe114fcc14ae42b786\"" Jul 11 00:31:59.326669 env[1218]: time="2025-07-11T00:31:59.326642940Z" level=info msg="StartContainer for \"4f817449a5a7b9df2556499f47b05ce0a16d3beedfedfabe114fcc14ae42b786\"" Jul 11 00:31:59.343386 systemd[1]: Started cri-containerd-4f817449a5a7b9df2556499f47b05ce0a16d3beedfedfabe114fcc14ae42b786.scope. Jul 11 00:31:59.384232 env[1218]: time="2025-07-11T00:31:59.383750420Z" level=info msg="StartContainer for \"4f817449a5a7b9df2556499f47b05ce0a16d3beedfedfabe114fcc14ae42b786\" returns successfully" Jul 11 00:31:59.444510 kubelet[1427]: I0711 00:31:59.444389 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-4vppp" podStartSLOduration=4.31816122 podStartE2EDuration="7.44437426s" podCreationTimestamp="2025-07-11 00:31:52 +0000 UTC" firstStartedPulling="2025-07-11 00:31:56.18136534 +0000 UTC m=+21.241673041" lastFinishedPulling="2025-07-11 00:31:59.30757838 +0000 UTC m=+24.367886081" observedRunningTime="2025-07-11 00:31:59.44270182 +0000 UTC m=+24.503009481" watchObservedRunningTime="2025-07-11 00:31:59.44437426 +0000 UTC m=+24.504681961" Jul 11 00:32:00.190081 kubelet[1427]: E0711 00:32:00.190011 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:01.190260 kubelet[1427]: E0711 00:32:01.190207 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:02.190496 kubelet[1427]: E0711 00:32:02.190449 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:03.191500 kubelet[1427]: E0711 00:32:03.191449 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:04.102614 systemd[1]: Created slice kubepods-besteffort-podac57b6c7_6647_4964_8a5d_50f2b19ac7df.slice. 
Jul 11 00:32:04.163545 kubelet[1427]: I0711 00:32:04.163501 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrzt\" (UniqueName: \"kubernetes.io/projected/ac57b6c7-6647-4964-8a5d-50f2b19ac7df-kube-api-access-mfrzt\") pod \"nfs-server-provisioner-0\" (UID: \"ac57b6c7-6647-4964-8a5d-50f2b19ac7df\") " pod="default/nfs-server-provisioner-0" Jul 11 00:32:04.163756 kubelet[1427]: I0711 00:32:04.163739 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ac57b6c7-6647-4964-8a5d-50f2b19ac7df-data\") pod \"nfs-server-provisioner-0\" (UID: \"ac57b6c7-6647-4964-8a5d-50f2b19ac7df\") " pod="default/nfs-server-provisioner-0" Jul 11 00:32:04.191769 kubelet[1427]: E0711 00:32:04.191730 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:04.406038 env[1218]: time="2025-07-11T00:32:04.405927766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ac57b6c7-6647-4964-8a5d-50f2b19ac7df,Namespace:default,Attempt:0,}" Jul 11 00:32:04.439596 systemd-networkd[1046]: lxc2893188c76b8: Link UP Jul 11 00:32:04.451148 kernel: eth0: renamed from tmpa9da5 Jul 11 00:32:04.460130 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:32:04.460218 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2893188c76b8: link becomes ready Jul 11 00:32:04.460922 systemd-networkd[1046]: lxc2893188c76b8: Gained carrier Jul 11 00:32:04.633183 env[1218]: time="2025-07-11T00:32:04.632966772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:32:04.633183 env[1218]: time="2025-07-11T00:32:04.633005813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:32:04.633183 env[1218]: time="2025-07-11T00:32:04.633016333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:32:04.633381 env[1218]: time="2025-07-11T00:32:04.633217735Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9da5d31d8dbe5ea8296a33eead281a66c6fbc2d1bdd3e5006cecddb96fed0d2 pid=2647 runtime=io.containerd.runc.v2 Jul 11 00:32:04.647916 systemd[1]: Started cri-containerd-a9da5d31d8dbe5ea8296a33eead281a66c6fbc2d1bdd3e5006cecddb96fed0d2.scope. 
Jul 11 00:32:04.670630 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:32:04.686242 env[1218]: time="2025-07-11T00:32:04.686194600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ac57b6c7-6647-4964-8a5d-50f2b19ac7df,Namespace:default,Attempt:0,} returns sandbox id \"a9da5d31d8dbe5ea8296a33eead281a66c6fbc2d1bdd3e5006cecddb96fed0d2\"" Jul 11 00:32:04.687781 env[1218]: time="2025-07-11T00:32:04.687739175Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 11 00:32:05.193256 kubelet[1427]: E0711 00:32:05.193209 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:06.194344 kubelet[1427]: E0711 00:32:06.194284 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:06.350303 systemd-networkd[1046]: lxc2893188c76b8: Gained IPv6LL Jul 11 00:32:07.195198 kubelet[1427]: E0711 00:32:07.195153 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:07.231058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571610653.mount: Deactivated successfully. Jul 11 00:32:08.195771 kubelet[1427]: E0711 00:32:08.195718 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:09.053324 env[1218]: time="2025-07-11T00:32:09.053266403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:09.054586 env[1218]: time="2025-07-11T00:32:09.054545172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:09.057460 env[1218]: time="2025-07-11T00:32:09.057422952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:09.059456 env[1218]: time="2025-07-11T00:32:09.059427486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:09.060059 env[1218]: time="2025-07-11T00:32:09.060029770Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 11 00:32:09.064874 env[1218]: time="2025-07-11T00:32:09.064836483Z" level=info msg="CreateContainer within sandbox \"a9da5d31d8dbe5ea8296a33eead281a66c6fbc2d1bdd3e5006cecddb96fed0d2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 11 00:32:09.073492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3540930305.mount: Deactivated successfully. 
Jul 11 00:32:09.077774 env[1218]: time="2025-07-11T00:32:09.077721532Z" level=info msg="CreateContainer within sandbox \"a9da5d31d8dbe5ea8296a33eead281a66c6fbc2d1bdd3e5006cecddb96fed0d2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ad3062d3662ffe74ca282803a951bd82d492771a78a1add1536bea39d2161f65\"" Jul 11 00:32:09.078366 env[1218]: time="2025-07-11T00:32:09.078320417Z" level=info msg="StartContainer for \"ad3062d3662ffe74ca282803a951bd82d492771a78a1add1536bea39d2161f65\"" Jul 11 00:32:09.098400 systemd[1]: Started cri-containerd-ad3062d3662ffe74ca282803a951bd82d492771a78a1add1536bea39d2161f65.scope. Jul 11 00:32:09.172388 env[1218]: time="2025-07-11T00:32:09.172342226Z" level=info msg="StartContainer for \"ad3062d3662ffe74ca282803a951bd82d492771a78a1add1536bea39d2161f65\" returns successfully" Jul 11 00:32:09.196196 kubelet[1427]: E0711 00:32:09.196148 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:09.473189 kubelet[1427]: I0711 00:32:09.473033 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.099295777 podStartE2EDuration="5.473015103s" podCreationTimestamp="2025-07-11 00:32:04 +0000 UTC" firstStartedPulling="2025-07-11 00:32:04.687460692 +0000 UTC m=+29.747768393" lastFinishedPulling="2025-07-11 00:32:09.061180018 +0000 UTC m=+34.121487719" observedRunningTime="2025-07-11 00:32:09.472799142 +0000 UTC m=+34.533106843" watchObservedRunningTime="2025-07-11 00:32:09.473015103 +0000 UTC m=+34.533322804" Jul 11 00:32:10.197490 kubelet[1427]: E0711 00:32:10.197440 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:11.198422 kubelet[1427]: E0711 00:32:11.198373 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:12.199306 kubelet[1427]: E0711 00:32:12.199263 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:13.200445 kubelet[1427]: E0711 00:32:13.200396 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:13.434913 update_engine[1213]: I0711 00:32:13.434523 1213 update_attempter.cc:509] Updating boot flags... Jul 11 00:32:14.201454 kubelet[1427]: E0711 00:32:14.201404 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:14.558946 systemd[1]: Created slice kubepods-besteffort-podbbabb66a_3943_44f4_900a_defd261f39a7.slice. 
Jul 11 00:32:14.634664 kubelet[1427]: I0711 00:32:14.634628 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90885cb4-470a-4055-b024-b2107572ed83\" (UniqueName: \"kubernetes.io/nfs/bbabb66a-3943-44f4-900a-defd261f39a7-pvc-90885cb4-470a-4055-b024-b2107572ed83\") pod \"test-pod-1\" (UID: \"bbabb66a-3943-44f4-900a-defd261f39a7\") " pod="default/test-pod-1" Jul 11 00:32:14.634796 kubelet[1427]: I0711 00:32:14.634670 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkb7\" (UniqueName: \"kubernetes.io/projected/bbabb66a-3943-44f4-900a-defd261f39a7-kube-api-access-8pkb7\") pod \"test-pod-1\" (UID: \"bbabb66a-3943-44f4-900a-defd261f39a7\") " pod="default/test-pod-1" Jul 11 00:32:14.764146 kernel: FS-Cache: Loaded Jul 11 00:32:14.795884 kernel: RPC: Registered named UNIX socket transport module. Jul 11 00:32:14.796069 kernel: RPC: Registered udp transport module. Jul 11 00:32:14.796093 kernel: RPC: Registered tcp transport module. Jul 11 00:32:14.796128 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 11 00:32:14.843143 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 11 00:32:14.986230 kernel: NFS: Registering the id_resolver key type Jul 11 00:32:14.986589 kernel: Key type id_resolver registered Jul 11 00:32:14.986638 kernel: Key type id_legacy registered Jul 11 00:32:15.045728 nfsidmap[2772]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 11 00:32:15.049390 nfsidmap[2775]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 11 00:32:15.162915 env[1218]: time="2025-07-11T00:32:15.162857497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bbabb66a-3943-44f4-900a-defd261f39a7,Namespace:default,Attempt:0,}" Jul 11 00:32:15.202294 kubelet[1427]: E0711 00:32:15.202243 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:15.235663 systemd-networkd[1046]: lxcc2f936779f03: Link UP Jul 11 00:32:15.247160 kernel: eth0: renamed from tmp109e8 Jul 11 00:32:15.256604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:32:15.256702 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc2f936779f03: link becomes ready Jul 11 00:32:15.256765 systemd-networkd[1046]: lxcc2f936779f03: Gained carrier Jul 11 00:32:15.453435 env[1218]: time="2025-07-11T00:32:15.453368180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:32:15.453435 env[1218]: time="2025-07-11T00:32:15.453411620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:32:15.453435 env[1218]: time="2025-07-11T00:32:15.453422140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:32:15.454026 env[1218]: time="2025-07-11T00:32:15.453891462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/109e84317c00ec6c16f682545b75a02a5572a0caef04814c8297537d0998e4d4 pid=2807 runtime=io.containerd.runc.v2 Jul 11 00:32:15.467227 systemd[1]: Started cri-containerd-109e84317c00ec6c16f682545b75a02a5572a0caef04814c8297537d0998e4d4.scope. 
Jul 11 00:32:15.484970 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:32:15.503449 env[1218]: time="2025-07-11T00:32:15.503400094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bbabb66a-3943-44f4-900a-defd261f39a7,Namespace:default,Attempt:0,} returns sandbox id \"109e84317c00ec6c16f682545b75a02a5572a0caef04814c8297537d0998e4d4\"" Jul 11 00:32:15.504426 env[1218]: time="2025-07-11T00:32:15.504392739Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 11 00:32:15.746439 env[1218]: time="2025-07-11T00:32:15.746348634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:15.752344 env[1218]: time="2025-07-11T00:32:15.752279382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:15.753889 env[1218]: time="2025-07-11T00:32:15.753844629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:15.757013 env[1218]: time="2025-07-11T00:32:15.756969924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:15.757667 env[1218]: time="2025-07-11T00:32:15.757630887Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 11 00:32:15.763798 env[1218]: time="2025-07-11T00:32:15.763760716Z" level=info msg="CreateContainer within sandbox \"109e84317c00ec6c16f682545b75a02a5572a0caef04814c8297537d0998e4d4\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 11 00:32:15.778069 env[1218]: time="2025-07-11T00:32:15.778011502Z" level=info msg="CreateContainer within sandbox \"109e84317c00ec6c16f682545b75a02a5572a0caef04814c8297537d0998e4d4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e47fb722d1fc97d8a60446feebeeda8d485b77e4aaa69ae0cd9e80008e9dc1e6\"" Jul 11 00:32:15.778588 env[1218]: time="2025-07-11T00:32:15.778548825Z" level=info msg="StartContainer for \"e47fb722d1fc97d8a60446feebeeda8d485b77e4aaa69ae0cd9e80008e9dc1e6\"" Jul 11 00:32:15.797581 systemd[1]: Started cri-containerd-e47fb722d1fc97d8a60446feebeeda8d485b77e4aaa69ae0cd9e80008e9dc1e6.scope. 
Jul 11 00:32:15.838639 env[1218]: time="2025-07-11T00:32:15.838474146Z" level=info msg="StartContainer for \"e47fb722d1fc97d8a60446feebeeda8d485b77e4aaa69ae0cd9e80008e9dc1e6\" returns successfully" Jul 11 00:32:16.174602 kubelet[1427]: E0711 00:32:16.174534 1427 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:16.202977 kubelet[1427]: E0711 00:32:16.202907 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:16.488929 kubelet[1427]: I0711 00:32:16.488595 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.233699178 podStartE2EDuration="12.488581934s" podCreationTimestamp="2025-07-11 00:32:04 +0000 UTC" firstStartedPulling="2025-07-11 00:32:15.504162378 +0000 UTC m=+40.564470079" lastFinishedPulling="2025-07-11 00:32:15.759045134 +0000 UTC m=+40.819352835" observedRunningTime="2025-07-11 00:32:16.488397493 +0000 UTC m=+41.548705194" watchObservedRunningTime="2025-07-11 00:32:16.488581934 +0000 UTC m=+41.548889635" Jul 11 00:32:16.718337 systemd-networkd[1046]: lxcc2f936779f03: Gained IPv6LL Jul 11 00:32:16.749992 systemd[1]: run-containerd-runc-k8s.io-e47fb722d1fc97d8a60446feebeeda8d485b77e4aaa69ae0cd9e80008e9dc1e6-runc.842rI7.mount: Deactivated successfully. Jul 11 00:32:17.203725 kubelet[1427]: E0711 00:32:17.203675 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:18.204522 kubelet[1427]: E0711 00:32:18.204473 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:19.205589 kubelet[1427]: E0711 00:32:19.205542 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:20.205949 kubelet[1427]: E0711 00:32:20.205901 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:21.206773 kubelet[1427]: E0711 00:32:21.206726 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:21.994756 env[1218]: time="2025-07-11T00:32:21.994687256Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:32:22.000519 env[1218]: time="2025-07-11T00:32:22.000472114Z" level=info msg="StopContainer for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" with timeout 2 (s)" Jul 11 00:32:22.000821 env[1218]: time="2025-07-11T00:32:22.000769555Z" level=info msg="Stop container \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" with signal terminated" Jul 11 00:32:22.006214 systemd-networkd[1046]: lxc_health: Link DOWN Jul 11 00:32:22.006221 systemd-networkd[1046]: lxc_health: Lost carrier Jul 11 00:32:22.048545 systemd[1]: cri-containerd-93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43.scope: Deactivated successfully. Jul 11 00:32:22.048866 systemd[1]: cri-containerd-93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43.scope: Consumed 6.439s CPU time. 
Jul 11 00:32:22.066712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43-rootfs.mount: Deactivated successfully. Jul 11 00:32:22.106657 env[1218]: time="2025-07-11T00:32:22.106602551Z" level=info msg="shim disconnected" id=93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43 Jul 11 00:32:22.106657 env[1218]: time="2025-07-11T00:32:22.106648871Z" level=warning msg="cleaning up after shim disconnected" id=93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43 namespace=k8s.io Jul 11 00:32:22.106657 env[1218]: time="2025-07-11T00:32:22.106661511Z" level=info msg="cleaning up dead shim" Jul 11 00:32:22.113903 env[1218]: time="2025-07-11T00:32:22.113850453Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:32:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2940 runtime=io.containerd.runc.v2\n" Jul 11 00:32:22.117030 env[1218]: time="2025-07-11T00:32:22.116988902Z" level=info msg="StopContainer for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" returns successfully" Jul 11 00:32:22.117715 env[1218]: time="2025-07-11T00:32:22.117678584Z" level=info msg="StopPodSandbox for \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\"" Jul 11 00:32:22.117793 env[1218]: time="2025-07-11T00:32:22.117744944Z" level=info msg="Container to stop \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:32:22.117793 env[1218]: time="2025-07-11T00:32:22.117761584Z" level=info msg="Container to stop \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:32:22.117793 env[1218]: time="2025-07-11T00:32:22.117773665Z" level=info msg="Container to stop \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:32:22.117793 env[1218]: time="2025-07-11T00:32:22.117785785Z" level=info msg="Container to stop \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:32:22.117925 env[1218]: time="2025-07-11T00:32:22.117796345Z" level=info msg="Container to stop \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:32:22.119542 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39-shm.mount: Deactivated successfully. Jul 11 00:32:22.125516 systemd[1]: cri-containerd-0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39.scope: Deactivated successfully. Jul 11 00:32:22.144170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39-rootfs.mount: Deactivated successfully. 
Jul 11 00:32:22.151809 env[1218]: time="2025-07-11T00:32:22.151754806Z" level=info msg="shim disconnected" id=0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39 Jul 11 00:32:22.151809 env[1218]: time="2025-07-11T00:32:22.151800326Z" level=warning msg="cleaning up after shim disconnected" id=0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39 namespace=k8s.io Jul 11 00:32:22.151809 env[1218]: time="2025-07-11T00:32:22.151810566Z" level=info msg="cleaning up dead shim" Jul 11 00:32:22.158746 env[1218]: time="2025-07-11T00:32:22.158699787Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:32:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2971 runtime=io.containerd.runc.v2\n" Jul 11 00:32:22.159014 env[1218]: time="2025-07-11T00:32:22.158991068Z" level=info msg="TearDown network for sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" successfully" Jul 11 00:32:22.159047 env[1218]: time="2025-07-11T00:32:22.159014108Z" level=info msg="StopPodSandbox for \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" returns successfully" Jul 11 00:32:22.183259 kubelet[1427]: I0711 00:32:22.183226 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cdx5\" (UniqueName: \"kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-kube-api-access-8cdx5\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183484 kubelet[1427]: I0711 00:32:22.183468 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-cgroup\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183649 kubelet[1427]: I0711 00:32:22.183558 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.183649 kubelet[1427]: I0711 00:32:22.183571 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-kernel\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183723 kubelet[1427]: I0711 00:32:22.183683 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-bpf-maps\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183723 kubelet[1427]: I0711 00:32:22.183705 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c845f69-df61-4215-98e1-604485b79b77-cilium-config-path\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183723 kubelet[1427]: I0711 00:32:22.183722 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-hostproc\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183806 kubelet[1427]: I0711 00:32:22.183737 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-xtables-lock\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183806 kubelet[1427]: I0711 00:32:22.183751 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-lib-modules\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183806 kubelet[1427]: I0711 00:32:22.183770 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c845f69-df61-4215-98e1-604485b79b77-clustermesh-secrets\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183806 kubelet[1427]: I0711 00:32:22.183784 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-run\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183806 kubelet[1427]: I0711 00:32:22.183797 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-net\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183928 kubelet[1427]: I0711 00:32:22.183813 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-hubble-tls\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 
11 00:32:22.183928 kubelet[1427]: I0711 00:32:22.183827 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cni-path\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183928 kubelet[1427]: I0711 00:32:22.183840 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-etc-cni-netd\") pod \"9c845f69-df61-4215-98e1-604485b79b77\" (UID: \"9c845f69-df61-4215-98e1-604485b79b77\") " Jul 11 00:32:22.183928 kubelet[1427]: I0711 00:32:22.183874 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-cgroup\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.183928 kubelet[1427]: I0711 00:32:22.183895 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.183928 kubelet[1427]: I0711 00:32:22.183911 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184533 kubelet[1427]: I0711 00:32:22.184161 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184533 kubelet[1427]: I0711 00:32:22.184191 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-hostproc" (OuterVolumeSpecName: "hostproc") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184533 kubelet[1427]: I0711 00:32:22.184206 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184533 kubelet[1427]: I0711 00:32:22.184239 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184533 kubelet[1427]: I0711 00:32:22.184255 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184800 kubelet[1427]: I0711 00:32:22.184743 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.184874 kubelet[1427]: I0711 00:32:22.184798 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cni-path" (OuterVolumeSpecName: "cni-path") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:22.185877 kubelet[1427]: I0711 00:32:22.185842 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c845f69-df61-4215-98e1-604485b79b77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:32:22.187847 kubelet[1427]: I0711 00:32:22.187816 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:32:22.189182 kubelet[1427]: I0711 00:32:22.189151 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-kube-api-access-8cdx5" (OuterVolumeSpecName: "kube-api-access-8cdx5") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "kube-api-access-8cdx5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:32:22.189490 systemd[1]: var-lib-kubelet-pods-9c845f69\x2ddf61\x2d4215\x2d98e1\x2d604485b79b77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8cdx5.mount: Deactivated successfully. Jul 11 00:32:22.189602 systemd[1]: var-lib-kubelet-pods-9c845f69\x2ddf61\x2d4215\x2d98e1\x2d604485b79b77-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:32:22.189659 systemd[1]: var-lib-kubelet-pods-9c845f69\x2ddf61\x2d4215\x2d98e1\x2d604485b79b77-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 11 00:32:22.190273 kubelet[1427]: I0711 00:32:22.190204 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c845f69-df61-4215-98e1-604485b79b77-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9c845f69-df61-4215-98e1-604485b79b77" (UID: "9c845f69-df61-4215-98e1-604485b79b77"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:32:22.207311 kubelet[1427]: E0711 00:32:22.207255 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:22.285429 kubelet[1427]: I0711 00:32:22.284770 1427 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-hostproc\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285537 kubelet[1427]: I0711 00:32:22.285433 1427 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-xtables-lock\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285537 kubelet[1427]: I0711 00:32:22.285485 1427 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-lib-modules\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285537 kubelet[1427]: I0711 00:32:22.285495 1427 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c845f69-df61-4215-98e1-604485b79b77-clustermesh-secrets\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285628 kubelet[1427]: I0711 00:32:22.285593 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cilium-run\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285628 kubelet[1427]: I0711 00:32:22.285608 1427 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-net\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285684 kubelet[1427]: I0711 00:32:22.285647 1427 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-hubble-tls\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285684 kubelet[1427]: I0711 00:32:22.285659 1427 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-cni-path\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285684 kubelet[1427]: I0711 00:32:22.285668 1427 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-etc-cni-netd\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285784 kubelet[1427]: I0711 00:32:22.285760 1427 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8cdx5\" (UniqueName: \"kubernetes.io/projected/9c845f69-df61-4215-98e1-604485b79b77-kube-api-access-8cdx5\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285784 kubelet[1427]: I0711 00:32:22.285782 1427 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-host-proc-sys-kernel\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285867 kubelet[1427]: I0711 00:32:22.285822 1427 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c845f69-df61-4215-98e1-604485b79b77-bpf-maps\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.285867 kubelet[1427]: I0711 00:32:22.285843 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c845f69-df61-4215-98e1-604485b79b77-cilium-config-path\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:22.389660 systemd[1]: Removed slice kubepods-burstable-pod9c845f69_df61_4215_98e1_604485b79b77.slice. Jul 11 00:32:22.389743 systemd[1]: kubepods-burstable-pod9c845f69_df61_4215_98e1_604485b79b77.slice: Consumed 6.622s CPU time. Jul 11 00:32:22.489453 kubelet[1427]: I0711 00:32:22.489414 1427 scope.go:117] "RemoveContainer" containerID="93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43" Jul 11 00:32:22.492101 env[1218]: time="2025-07-11T00:32:22.492064582Z" level=info msg="RemoveContainer for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\"" Jul 11 00:32:22.498203 env[1218]: time="2025-07-11T00:32:22.498165800Z" level=info msg="RemoveContainer for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" returns successfully" Jul 11 00:32:22.498562 kubelet[1427]: I0711 00:32:22.498542 1427 scope.go:117] "RemoveContainer" containerID="0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c" Jul 11 00:32:22.500269 env[1218]: time="2025-07-11T00:32:22.500240926Z" level=info msg="RemoveContainer for \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\"" Jul 11 00:32:22.505802 env[1218]: time="2025-07-11T00:32:22.505767423Z" level=info msg="RemoveContainer for \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\" returns successfully" Jul 11 00:32:22.506101 kubelet[1427]: I0711 00:32:22.506079 1427 scope.go:117] "RemoveContainer" containerID="ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b" Jul 11 00:32:22.508435 env[1218]: time="2025-07-11T00:32:22.508406111Z" level=info msg="RemoveContainer for \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\"" Jul 11 00:32:22.510664 env[1218]: time="2025-07-11T00:32:22.510632757Z" level=info msg="RemoveContainer for \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\" returns successfully" Jul 11 00:32:22.510896 kubelet[1427]: I0711 00:32:22.510869 1427 scope.go:117] "RemoveContainer" containerID="0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08" Jul 11 00:32:22.511848 env[1218]: time="2025-07-11T00:32:22.511820041Z" level=info msg="RemoveContainer for \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\"" Jul 11 00:32:22.516571 env[1218]: time="2025-07-11T00:32:22.516527335Z" level=info msg="RemoveContainer for \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\" returns successfully" Jul 11 00:32:22.516746 kubelet[1427]: I0711 00:32:22.516712 1427 scope.go:117] "RemoveContainer" containerID="f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0" Jul 11 00:32:22.517839 env[1218]: time="2025-07-11T00:32:22.517800739Z" level=info msg="RemoveContainer for \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\"" Jul 11 00:32:22.519894 env[1218]: time="2025-07-11T00:32:22.519857185Z" level=info 
msg="RemoveContainer for \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\" returns successfully" Jul 11 00:32:22.520054 kubelet[1427]: I0711 00:32:22.520023 1427 scope.go:117] "RemoveContainer" containerID="93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43" Jul 11 00:32:22.520379 env[1218]: time="2025-07-11T00:32:22.520279266Z" level=error msg="ContainerStatus for \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\": not found" Jul 11 00:32:22.520512 kubelet[1427]: E0711 00:32:22.520482 1427 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\": not found" containerID="93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43" Jul 11 00:32:22.520561 kubelet[1427]: I0711 00:32:22.520520 1427 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43"} err="failed to get container status \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\": rpc error: code = NotFound desc = an error occurred when try to find container \"93d636c07749dbe819595e5fc2adce2094fdd7dde4b75a7a8a529a9a2f047d43\": not found" Jul 11 00:32:22.520636 kubelet[1427]: I0711 00:32:22.520564 1427 scope.go:117] "RemoveContainer" containerID="0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c" Jul 11 00:32:22.520814 env[1218]: time="2025-07-11T00:32:22.520752908Z" level=error msg="ContainerStatus for \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\": not found" Jul 11 00:32:22.520918 kubelet[1427]: E0711 00:32:22.520900 1427 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\": not found" containerID="0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c" Jul 11 00:32:22.520944 kubelet[1427]: I0711 00:32:22.520926 1427 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c"} err="failed to get container status \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f5502ca0117ba28340df428db8be9c30a702b5b4035997a1930a3443e43bf9c\": not found" Jul 11 00:32:22.520968 kubelet[1427]: I0711 00:32:22.520944 1427 scope.go:117] "RemoveContainer" containerID="ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b" Jul 11 00:32:22.521187 env[1218]: time="2025-07-11T00:32:22.521140029Z" level=error msg="ContainerStatus for \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\": not found" Jul 11 00:32:22.521315 kubelet[1427]: E0711 00:32:22.521284 1427 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\": not found" containerID="ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b" Jul 11 00:32:22.521348 kubelet[1427]: I0711 00:32:22.521325 1427 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b"} err="failed to get container status \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea41f83177301e87fec9cd6432efdb910c3af43ecc8da69dd3a7a98c3ea4002b\": not found" Jul 11 00:32:22.521348 kubelet[1427]: I0711 00:32:22.521344 1427 scope.go:117] "RemoveContainer" containerID="0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08" Jul 11 00:32:22.521580 env[1218]: time="2025-07-11T00:32:22.521523230Z" level=error msg="ContainerStatus for \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\": not found" Jul 11 00:32:22.521677 kubelet[1427]: E0711 00:32:22.521658 1427 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\": not found" containerID="0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08" Jul 11 00:32:22.521707 kubelet[1427]: I0711 00:32:22.521683 1427 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08"} err="failed to get container status \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"0768e3a5d48677e0f984cf84089ca6980dd7a373d1e6c5c858d6bfaec8bd8a08\": not found" Jul 11 00:32:22.521707 kubelet[1427]: I0711 00:32:22.521698 1427 scope.go:117] "RemoveContainer" containerID="f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0" Jul 11 00:32:22.521917 env[1218]: time="2025-07-11T00:32:22.521871711Z" level=error msg="ContainerStatus for \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\": not found" Jul 11 00:32:22.522035 kubelet[1427]: E0711 00:32:22.522011 1427 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\": not found" containerID="f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0" Jul 11 00:32:22.522063 kubelet[1427]: I0711 00:32:22.522041 1427 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0"} err="failed to get container status \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9a483ae9f0250299d07f98d3a4c4c6cf158fdd6ac8fe0c760d1a2dd1786b8b0\": not found" Jul 11 00:32:23.207795 
kubelet[1427]: E0711 00:32:23.207738 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:24.208381 kubelet[1427]: E0711 00:32:24.208325 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:24.385722 kubelet[1427]: I0711 00:32:24.385674 1427 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c845f69-df61-4215-98e1-604485b79b77" path="/var/lib/kubelet/pods/9c845f69-df61-4215-98e1-604485b79b77/volumes" Jul 11 00:32:24.544160 systemd[1]: Created slice kubepods-besteffort-pod00962bd4_d1a2_4ca0_a2e4_235bc69f9d8a.slice. Jul 11 00:32:24.549401 systemd[1]: Created slice kubepods-burstable-poddbac102a_40dc_413b_925f_46b930349926.slice. Jul 11 00:32:24.596402 kubelet[1427]: I0711 00:32:24.596359 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-clustermesh-secrets\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596402 kubelet[1427]: I0711 00:32:24.596402 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-hostproc\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596560 kubelet[1427]: I0711 00:32:24.596424 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-cgroup\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596560 kubelet[1427]: I0711 00:32:24.596439 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-run\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596560 kubelet[1427]: I0711 00:32:24.596453 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-kernel\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596560 kubelet[1427]: I0711 00:32:24.596467 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-hubble-tls\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596560 kubelet[1427]: I0711 00:32:24.596491 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprz8\" (UniqueName: \"kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-kube-api-access-pprz8\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596692 kubelet[1427]: I0711 00:32:24.596507 1427 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gmx\" (UniqueName: \"kubernetes.io/projected/00962bd4-d1a2-4ca0-a2e4-235bc69f9d8a-kube-api-access-89gmx\") pod \"cilium-operator-6c4d7847fc-lwbl6\" (UID: \"00962bd4-d1a2-4ca0-a2e4-235bc69f9d8a\") " pod="kube-system/cilium-operator-6c4d7847fc-lwbl6" Jul 11 00:32:24.596692 kubelet[1427]: I0711 00:32:24.596523 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-bpf-maps\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596692 kubelet[1427]: I0711 00:32:24.596536 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cni-path\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596692 kubelet[1427]: I0711 00:32:24.596551 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-etc-cni-netd\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596692 kubelet[1427]: I0711 00:32:24.596566 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-cilium-ipsec-secrets\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596799 kubelet[1427]: I0711 00:32:24.596581 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00962bd4-d1a2-4ca0-a2e4-235bc69f9d8a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lwbl6\" (UID: \"00962bd4-d1a2-4ca0-a2e4-235bc69f9d8a\") " pod="kube-system/cilium-operator-6c4d7847fc-lwbl6" Jul 11 00:32:24.596799 kubelet[1427]: I0711 00:32:24.596596 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-net\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596799 kubelet[1427]: I0711 00:32:24.596614 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-lib-modules\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596799 kubelet[1427]: I0711 00:32:24.596631 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-xtables-lock\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.596799 kubelet[1427]: I0711 00:32:24.596645 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/dbac102a-40dc-413b-925f-46b930349926-cilium-config-path\") pod \"cilium-ptsvz\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " pod="kube-system/cilium-ptsvz" Jul 11 00:32:24.694291 kubelet[1427]: E0711 00:32:24.694234 1427 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-pprz8 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-ptsvz" podUID="dbac102a-40dc-413b-925f-46b930349926" Jul 11 00:32:24.846375 kubelet[1427]: E0711 00:32:24.846239 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:24.847660 env[1218]: time="2025-07-11T00:32:24.847320723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lwbl6,Uid:00962bd4-d1a2-4ca0-a2e4-235bc69f9d8a,Namespace:kube-system,Attempt:0,}" Jul 11 00:32:24.860712 env[1218]: time="2025-07-11T00:32:24.860636278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:32:24.860712 env[1218]: time="2025-07-11T00:32:24.860676238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:32:24.860712 env[1218]: time="2025-07-11T00:32:24.860686398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:32:24.860853 env[1218]: time="2025-07-11T00:32:24.860826958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4184dc575f2294dae2c9282c3f8be4e58df3de926772e8d0b2db880766bc0613 pid=3000 runtime=io.containerd.runc.v2 Jul 11 00:32:24.870494 systemd[1]: Started cri-containerd-4184dc575f2294dae2c9282c3f8be4e58df3de926772e8d0b2db880766bc0613.scope. 
Jul 11 00:32:24.924306 env[1218]: time="2025-07-11T00:32:24.924255565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lwbl6,Uid:00962bd4-d1a2-4ca0-a2e4-235bc69f9d8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4184dc575f2294dae2c9282c3f8be4e58df3de926772e8d0b2db880766bc0613\"" Jul 11 00:32:24.925000 kubelet[1427]: E0711 00:32:24.924978 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:24.925775 env[1218]: time="2025-07-11T00:32:24.925729968Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:32:25.211347 kubelet[1427]: E0711 00:32:25.211258 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:25.608453 kubelet[1427]: I0711 00:32:25.608313 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-cilium-ipsec-secrets\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608453 kubelet[1427]: I0711 00:32:25.608360 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-lib-modules\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608453 kubelet[1427]: I0711 00:32:25.608380 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-clustermesh-secrets\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608453 kubelet[1427]: I0711 00:32:25.608395 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-cgroup\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608453 kubelet[1427]: I0711 00:32:25.608410 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-kernel\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608453 kubelet[1427]: I0711 00:32:25.608428 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pprz8\" (UniqueName: \"kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-kube-api-access-pprz8\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608818 kubelet[1427]: I0711 00:32:25.608579 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-etc-cni-netd\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608818 kubelet[1427]: I0711 00:32:25.608605 1427 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbac102a-40dc-413b-925f-46b930349926-cilium-config-path\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608818 kubelet[1427]: I0711 00:32:25.608619 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-run\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608818 kubelet[1427]: I0711 00:32:25.608633 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-bpf-maps\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608818 kubelet[1427]: I0711 00:32:25.608658 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-net\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608818 kubelet[1427]: I0711 00:32:25.608676 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-xtables-lock\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608946 kubelet[1427]: I0711 00:32:25.608691 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-hostproc\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608946 kubelet[1427]: I0711 00:32:25.608706 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-hubble-tls\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608946 kubelet[1427]: I0711 00:32:25.608720 1427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cni-path\") pod \"dbac102a-40dc-413b-925f-46b930349926\" (UID: \"dbac102a-40dc-413b-925f-46b930349926\") " Jul 11 00:32:25.608946 kubelet[1427]: I0711 00:32:25.608788 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cni-path" (OuterVolumeSpecName: "cni-path") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609223 kubelet[1427]: I0711 00:32:25.609176 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609223 kubelet[1427]: I0711 00:32:25.609213 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609323 kubelet[1427]: I0711 00:32:25.609179 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-hostproc" (OuterVolumeSpecName: "hostproc") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609323 kubelet[1427]: I0711 00:32:25.609234 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609323 kubelet[1427]: I0711 00:32:25.609249 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609323 kubelet[1427]: I0711 00:32:25.609253 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609323 kubelet[1427]: I0711 00:32:25.609269 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609439 kubelet[1427]: I0711 00:32:25.609332 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.609439 kubelet[1427]: I0711 00:32:25.609399 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:32:25.611181 kubelet[1427]: I0711 00:32:25.611152 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbac102a-40dc-413b-925f-46b930349926-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:32:25.612031 kubelet[1427]: I0711 00:32:25.611992 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:32:25.612486 kubelet[1427]: I0711 00:32:25.612460 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:32:25.612771 kubelet[1427]: I0711 00:32:25.612743 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:32:25.612965 kubelet[1427]: I0711 00:32:25.612929 1427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-kube-api-access-pprz8" (OuterVolumeSpecName: "kube-api-access-pprz8") pod "dbac102a-40dc-413b-925f-46b930349926" (UID: "dbac102a-40dc-413b-925f-46b930349926"). InnerVolumeSpecName "kube-api-access-pprz8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:32:25.703362 systemd[1]: var-lib-kubelet-pods-dbac102a\x2d40dc\x2d413b\x2d925f\x2d46b930349926-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpprz8.mount: Deactivated successfully. Jul 11 00:32:25.703450 systemd[1]: var-lib-kubelet-pods-dbac102a\x2d40dc\x2d413b\x2d925f\x2d46b930349926-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 11 00:32:25.703506 systemd[1]: var-lib-kubelet-pods-dbac102a\x2d40dc\x2d413b\x2d925f\x2d46b930349926-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:32:25.703556 systemd[1]: var-lib-kubelet-pods-dbac102a\x2d40dc\x2d413b\x2d925f\x2d46b930349926-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 11 00:32:25.709107 kubelet[1427]: I0711 00:32:25.709073 1427 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pprz8\" (UniqueName: \"kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-kube-api-access-pprz8\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709306 kubelet[1427]: I0711 00:32:25.709269 1427 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-etc-cni-netd\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709390 kubelet[1427]: I0711 00:32:25.709379 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbac102a-40dc-413b-925f-46b930349926-cilium-config-path\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709452 kubelet[1427]: I0711 00:32:25.709442 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-run\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709521 kubelet[1427]: I0711 00:32:25.709511 1427 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-bpf-maps\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709586 kubelet[1427]: I0711 00:32:25.709576 1427 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-net\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709647 kubelet[1427]: I0711 00:32:25.709639 1427 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-xtables-lock\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709697 kubelet[1427]: I0711 00:32:25.709689 1427 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-hostproc\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709763 kubelet[1427]: I0711 00:32:25.709754 1427 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbac102a-40dc-413b-925f-46b930349926-hubble-tls\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709829 kubelet[1427]: I0711 00:32:25.709811 1427 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cni-path\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709894 kubelet[1427]: I0711 00:32:25.709884 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-cilium-ipsec-secrets\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.709964 kubelet[1427]: I0711 00:32:25.709954 1427 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-lib-modules\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.710020 kubelet[1427]: I0711 00:32:25.710011 1427 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbac102a-40dc-413b-925f-46b930349926-clustermesh-secrets\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.710080 kubelet[1427]: I0711 
00:32:25.710064 1427 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-cilium-cgroup\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.710146 kubelet[1427]: I0711 00:32:25.710136 1427 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbac102a-40dc-413b-925f-46b930349926-host-proc-sys-kernel\") on node \"10.0.0.78\" DevicePath \"\"" Jul 11 00:32:25.948885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216135327.mount: Deactivated successfully. Jul 11 00:32:26.212071 kubelet[1427]: E0711 00:32:26.211965 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:26.337351 kubelet[1427]: E0711 00:32:26.337268 1427 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:32:26.388436 systemd[1]: Removed slice kubepods-burstable-poddbac102a_40dc_413b_925f_46b930349926.slice. Jul 11 00:32:26.541359 systemd[1]: Created slice kubepods-burstable-pod2ccbaa6c_11ef_48d4_9527_6b20266f3c63.slice. Jul 11 00:32:26.616252 kubelet[1427]: I0711 00:32:26.616202 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-cilium-ipsec-secrets\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616252 kubelet[1427]: I0711 00:32:26.616246 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-cilium-run\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616439 kubelet[1427]: I0711 00:32:26.616275 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-bpf-maps\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616439 kubelet[1427]: I0711 00:32:26.616297 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-etc-cni-netd\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616439 kubelet[1427]: I0711 00:32:26.616312 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-host-proc-sys-net\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616439 kubelet[1427]: I0711 00:32:26.616327 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-hubble-tls\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616439 
kubelet[1427]: I0711 00:32:26.616341 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-xtables-lock\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616439 kubelet[1427]: I0711 00:32:26.616359 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f6rj\" (UniqueName: \"kubernetes.io/projected/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-kube-api-access-7f6rj\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616577 kubelet[1427]: I0711 00:32:26.616377 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-cilium-cgroup\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616577 kubelet[1427]: I0711 00:32:26.616391 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-lib-modules\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616577 kubelet[1427]: I0711 00:32:26.616405 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-host-proc-sys-kernel\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616577 kubelet[1427]: I0711 00:32:26.616420 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-cni-path\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616577 kubelet[1427]: I0711 00:32:26.616437 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-cilium-config-path\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616577 kubelet[1427]: I0711 00:32:26.616453 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-hostproc\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.616718 kubelet[1427]: I0711 00:32:26.616481 1427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ccbaa6c-11ef-48d4-9527-6b20266f3c63-clustermesh-secrets\") pod \"cilium-bt62p\" (UID: \"2ccbaa6c-11ef-48d4-9527-6b20266f3c63\") " pod="kube-system/cilium-bt62p" Jul 11 00:32:26.659162 env[1218]: time="2025-07-11T00:32:26.659104025Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:26.660616 env[1218]: time="2025-07-11T00:32:26.660582748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:26.662150 env[1218]: time="2025-07-11T00:32:26.662117832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:32:26.662670 env[1218]: time="2025-07-11T00:32:26.662641193Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 11 00:32:26.666387 env[1218]: time="2025-07-11T00:32:26.666350761Z" level=info msg="CreateContainer within sandbox \"4184dc575f2294dae2c9282c3f8be4e58df3de926772e8d0b2db880766bc0613\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 00:32:26.676019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730844240.mount: Deactivated successfully. Jul 11 00:32:26.681520 env[1218]: time="2025-07-11T00:32:26.681471276Z" level=info msg="CreateContainer within sandbox \"4184dc575f2294dae2c9282c3f8be4e58df3de926772e8d0b2db880766bc0613\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b62595cc944c5ed3284f3dab33854e15949b728a12dbdd60ab556c040c74b971\"" Jul 11 00:32:26.682262 env[1218]: time="2025-07-11T00:32:26.682236558Z" level=info msg="StartContainer for \"b62595cc944c5ed3284f3dab33854e15949b728a12dbdd60ab556c040c74b971\"" Jul 11 00:32:26.710506 systemd[1]: Started cri-containerd-b62595cc944c5ed3284f3dab33854e15949b728a12dbdd60ab556c040c74b971.scope. Jul 11 00:32:26.764145 env[1218]: time="2025-07-11T00:32:26.763461025Z" level=info msg="StartContainer for \"b62595cc944c5ed3284f3dab33854e15949b728a12dbdd60ab556c040c74b971\" returns successfully" Jul 11 00:32:26.856245 kubelet[1427]: E0711 00:32:26.855977 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:26.856688 env[1218]: time="2025-07-11T00:32:26.856650440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bt62p,Uid:2ccbaa6c-11ef-48d4-9527-6b20266f3c63,Namespace:kube-system,Attempt:0,}" Jul 11 00:32:26.869320 env[1218]: time="2025-07-11T00:32:26.869232989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:32:26.869456 env[1218]: time="2025-07-11T00:32:26.869326150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:32:26.869456 env[1218]: time="2025-07-11T00:32:26.869352710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:32:26.869593 env[1218]: time="2025-07-11T00:32:26.869533270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e pid=3093 runtime=io.containerd.runc.v2 Jul 11 00:32:26.879377 systemd[1]: Started cri-containerd-4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e.scope. Jul 11 00:32:26.917760 env[1218]: time="2025-07-11T00:32:26.917710341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bt62p,Uid:2ccbaa6c-11ef-48d4-9527-6b20266f3c63,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\"" Jul 11 00:32:26.918745 kubelet[1427]: E0711 00:32:26.918426 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:26.922018 env[1218]: time="2025-07-11T00:32:26.921968711Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:32:26.933044 env[1218]: time="2025-07-11T00:32:26.932990936Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5\"" Jul 11 00:32:26.933733 env[1218]: time="2025-07-11T00:32:26.933696778Z" level=info msg="StartContainer for \"b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5\"" Jul 11 00:32:26.947224 systemd[1]: Started cri-containerd-b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5.scope. Jul 11 00:32:26.977810 env[1218]: time="2025-07-11T00:32:26.977764680Z" level=info msg="StartContainer for \"b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5\" returns successfully" Jul 11 00:32:27.014366 systemd[1]: cri-containerd-b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5.scope: Deactivated successfully. 
Jul 11 00:32:27.036622 env[1218]: time="2025-07-11T00:32:27.036574050Z" level=info msg="shim disconnected" id=b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5 Jul 11 00:32:27.036622 env[1218]: time="2025-07-11T00:32:27.036619970Z" level=warning msg="cleaning up after shim disconnected" id=b0bccf2eac8712f7a7495b95e1825b8a83e4dce20b551ef00d71ec5aab3b92f5 namespace=k8s.io Jul 11 00:32:27.036622 env[1218]: time="2025-07-11T00:32:27.036628690Z" level=info msg="cleaning up dead shim" Jul 11 00:32:27.045286 env[1218]: time="2025-07-11T00:32:27.045234949Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:32:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3176 runtime=io.containerd.runc.v2\n" Jul 11 00:32:27.212834 kubelet[1427]: E0711 00:32:27.212781 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:27.215061 kubelet[1427]: I0711 00:32:27.214475 1427 setters.go:618] "Node became not ready" node="10.0.0.78" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:32:27Z","lastTransitionTime":"2025-07-11T00:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 00:32:27.503726 kubelet[1427]: E0711 00:32:27.502627 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:27.504718 kubelet[1427]: E0711 00:32:27.504695 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:27.509471 env[1218]: time="2025-07-11T00:32:27.509422673Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:32:27.531786 env[1218]: time="2025-07-11T00:32:27.531702441Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e\"" Jul 11 00:32:27.532272 env[1218]: time="2025-07-11T00:32:27.532234202Z" level=info msg="StartContainer for \"48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e\"" Jul 11 00:32:27.541488 kubelet[1427]: I0711 00:32:27.541417 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lwbl6" podStartSLOduration=1.803435115 podStartE2EDuration="3.541398542s" podCreationTimestamp="2025-07-11 00:32:24 +0000 UTC" firstStartedPulling="2025-07-11 00:32:24.925469248 +0000 UTC m=+49.985776949" lastFinishedPulling="2025-07-11 00:32:26.663432675 +0000 UTC m=+51.723740376" observedRunningTime="2025-07-11 00:32:27.516679328 +0000 UTC m=+52.576987029" watchObservedRunningTime="2025-07-11 00:32:27.541398542 +0000 UTC m=+52.601706203" Jul 11 00:32:27.547861 systemd[1]: Started cri-containerd-48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e.scope. 
Jul 11 00:32:27.596433 env[1218]: time="2025-07-11T00:32:27.596360381Z" level=info msg="StartContainer for \"48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e\" returns successfully" Jul 11 00:32:27.613171 systemd[1]: cri-containerd-48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e.scope: Deactivated successfully. Jul 11 00:32:27.635832 env[1218]: time="2025-07-11T00:32:27.635743826Z" level=info msg="shim disconnected" id=48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e Jul 11 00:32:27.635832 env[1218]: time="2025-07-11T00:32:27.635823066Z" level=warning msg="cleaning up after shim disconnected" id=48ba4d1c39a6b6759fb492031bb788ab3fe0738410007db3148cb944943a716e namespace=k8s.io Jul 11 00:32:27.636177 env[1218]: time="2025-07-11T00:32:27.635869106Z" level=info msg="cleaning up dead shim" Jul 11 00:32:27.643777 env[1218]: time="2025-07-11T00:32:27.643719643Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:32:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3238 runtime=io.containerd.runc.v2\n" Jul 11 00:32:28.213281 kubelet[1427]: E0711 00:32:28.213202 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:28.386148 kubelet[1427]: I0711 00:32:28.386017 1427 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbac102a-40dc-413b-925f-46b930349926" path="/var/lib/kubelet/pods/dbac102a-40dc-413b-925f-46b930349926/volumes" Jul 11 00:32:28.508095 kubelet[1427]: E0711 00:32:28.507868 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:28.508546 kubelet[1427]: E0711 00:32:28.508521 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:28.512604 env[1218]: time="2025-07-11T00:32:28.512562853Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:32:28.525612 env[1218]: time="2025-07-11T00:32:28.525498120Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2\"" Jul 11 00:32:28.525988 env[1218]: time="2025-07-11T00:32:28.525963721Z" level=info msg="StartContainer for \"0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2\"" Jul 11 00:32:28.544734 systemd[1]: Started cri-containerd-0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2.scope. Jul 11 00:32:28.582390 systemd[1]: cri-containerd-0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2.scope: Deactivated successfully. 
Jul 11 00:32:28.583517 env[1218]: time="2025-07-11T00:32:28.583467437Z" level=info msg="StartContainer for \"0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2\" returns successfully" Jul 11 00:32:28.606209 env[1218]: time="2025-07-11T00:32:28.606081563Z" level=info msg="shim disconnected" id=0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2 Jul 11 00:32:28.606209 env[1218]: time="2025-07-11T00:32:28.606195483Z" level=warning msg="cleaning up after shim disconnected" id=0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2 namespace=k8s.io Jul 11 00:32:28.606209 env[1218]: time="2025-07-11T00:32:28.606205563Z" level=info msg="cleaning up dead shim" Jul 11 00:32:28.613040 env[1218]: time="2025-07-11T00:32:28.612999217Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:32:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3293 runtime=io.containerd.runc.v2\n" Jul 11 00:32:28.702971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0baaca0aff996516fe635c847ad39396e38cd8c2c74f78820ebe825a3b4e3ae2-rootfs.mount: Deactivated successfully. Jul 11 00:32:29.213532 kubelet[1427]: E0711 00:32:29.213474 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:29.513184 kubelet[1427]: E0711 00:32:29.511828 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:29.516139 env[1218]: time="2025-07-11T00:32:29.515804182Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:32:29.526091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757360763.mount: Deactivated successfully. Jul 11 00:32:29.528832 env[1218]: time="2025-07-11T00:32:29.528789847Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7\"" Jul 11 00:32:29.530056 env[1218]: time="2025-07-11T00:32:29.530023329Z" level=info msg="StartContainer for \"10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7\"" Jul 11 00:32:29.545393 systemd[1]: Started cri-containerd-10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7.scope. Jul 11 00:32:29.584723 systemd[1]: cri-containerd-10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7.scope: Deactivated successfully. 
Jul 11 00:32:29.585485 env[1218]: time="2025-07-11T00:32:29.584997754Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ccbaa6c_11ef_48d4_9527_6b20266f3c63.slice/cri-containerd-10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7.scope/memory.events\": no such file or directory" Jul 11 00:32:29.589498 env[1218]: time="2025-07-11T00:32:29.589457082Z" level=info msg="StartContainer for \"10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7\" returns successfully" Jul 11 00:32:29.609967 env[1218]: time="2025-07-11T00:32:29.609910681Z" level=info msg="shim disconnected" id=10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7 Jul 11 00:32:29.609967 env[1218]: time="2025-07-11T00:32:29.609955921Z" level=warning msg="cleaning up after shim disconnected" id=10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7 namespace=k8s.io Jul 11 00:32:29.609967 env[1218]: time="2025-07-11T00:32:29.609966241Z" level=info msg="cleaning up dead shim" Jul 11 00:32:29.617093 env[1218]: time="2025-07-11T00:32:29.617057095Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:32:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3349 runtime=io.containerd.runc.v2\n" Jul 11 00:32:29.703062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10cea5870113c02dde03a3acf5de51f746c8b491c8aabf21a939c88105d60de7-rootfs.mount: Deactivated successfully. Jul 11 00:32:30.213976 kubelet[1427]: E0711 00:32:30.213928 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:30.516215 kubelet[1427]: E0711 00:32:30.516103 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:30.519548 env[1218]: time="2025-07-11T00:32:30.519508030Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:32:30.544463 env[1218]: time="2025-07-11T00:32:30.544231394Z" level=info msg="CreateContainer within sandbox \"4b9e6c5d6bc7ae71ab89d4a9f4128f5fb4472ead20d20705f3876d07ee6bc02e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76\"" Jul 11 00:32:30.545913 env[1218]: time="2025-07-11T00:32:30.545882957Z" level=info msg="StartContainer for \"4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76\"" Jul 11 00:32:30.565152 systemd[1]: Started cri-containerd-4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76.scope. Jul 11 00:32:30.603974 env[1218]: time="2025-07-11T00:32:30.600901055Z" level=info msg="StartContainer for \"4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76\" returns successfully" Jul 11 00:32:30.703309 systemd[1]: run-containerd-runc-k8s.io-4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76-runc.hboUpp.mount: Deactivated successfully. 
Jul 11 00:32:30.850150 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 11 00:32:31.214411 kubelet[1427]: E0711 00:32:31.214333 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:31.521747 kubelet[1427]: E0711 00:32:31.521627 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:31.537310 kubelet[1427]: I0711 00:32:31.537227 1427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bt62p" podStartSLOduration=5.537213504 podStartE2EDuration="5.537213504s" podCreationTimestamp="2025-07-11 00:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:32:31.537095824 +0000 UTC m=+56.597403525" watchObservedRunningTime="2025-07-11 00:32:31.537213504 +0000 UTC m=+56.597521205" Jul 11 00:32:32.214653 kubelet[1427]: E0711 00:32:32.214621 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:32.857132 kubelet[1427]: E0711 00:32:32.857072 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:33.075653 systemd[1]: run-containerd-runc-k8s.io-4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76-runc.OZAXwj.mount: Deactivated successfully. Jul 11 00:32:33.215918 kubelet[1427]: E0711 00:32:33.215866 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:33.662569 systemd-networkd[1046]: lxc_health: Link UP Jul 11 00:32:33.670374 systemd-networkd[1046]: lxc_health: Gained carrier Jul 11 00:32:33.671141 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 11 00:32:34.216700 kubelet[1427]: E0711 00:32:34.216647 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:34.858133 kubelet[1427]: E0711 00:32:34.858066 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:35.215300 systemd-networkd[1046]: lxc_health: Gained IPv6LL Jul 11 00:32:35.216818 kubelet[1427]: E0711 00:32:35.216771 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:35.223270 systemd[1]: run-containerd-runc-k8s.io-4f08d8c792b77b2baa386ea410c9956876cc1193731292f4af4ab8956015fb76-runc.QSanaI.mount: Deactivated successfully. 
Jul 11 00:32:35.529940 kubelet[1427]: E0711 00:32:35.529420 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:36.174557 kubelet[1427]: E0711 00:32:36.174519 1427 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:36.188474 env[1218]: time="2025-07-11T00:32:36.188429806Z" level=info msg="StopPodSandbox for \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\"" Jul 11 00:32:36.188819 env[1218]: time="2025-07-11T00:32:36.188526886Z" level=info msg="TearDown network for sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" successfully" Jul 11 00:32:36.188819 env[1218]: time="2025-07-11T00:32:36.188560446Z" level=info msg="StopPodSandbox for \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" returns successfully" Jul 11 00:32:36.189670 env[1218]: time="2025-07-11T00:32:36.189643487Z" level=info msg="RemovePodSandbox for \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\"" Jul 11 00:32:36.189760 env[1218]: time="2025-07-11T00:32:36.189671127Z" level=info msg="Forcibly stopping sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\"" Jul 11 00:32:36.189760 env[1218]: time="2025-07-11T00:32:36.189738007Z" level=info msg="TearDown network for sandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" successfully" Jul 11 00:32:36.193811 env[1218]: time="2025-07-11T00:32:36.193763572Z" level=info msg="RemovePodSandbox \"0a6e16985afe2d4e588d5bbcb0d6e6bfb7af5fd8a26482c7ba73bf3c7f692f39\" returns successfully" Jul 11 00:32:36.218201 kubelet[1427]: E0711 00:32:36.218162 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:36.531163 kubelet[1427]: E0711 00:32:36.531045 1427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:32:37.219298 kubelet[1427]: E0711 00:32:37.219251 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:38.219442 kubelet[1427]: E0711 00:32:38.219398 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:39.222370 kubelet[1427]: E0711 00:32:39.222322 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:40.222604 kubelet[1427]: E0711 00:32:40.222572 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:32:41.223569 kubelet[1427]: E0711 00:32:41.223507 1427 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"