May 13 00:19:44.726426 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:19:44.726445 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon May 12 23:22:00 -00 2025
May 13 00:19:44.726453 kernel: efi: EFI v2.70 by EDK II
May 13 00:19:44.726458 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 13 00:19:44.726463 kernel: random: crng init done
May 13 00:19:44.726469 kernel: ACPI: Early table checksum verification disabled
May 13 00:19:44.726475 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 13 00:19:44.726481 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:19:44.726487 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726492 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726498 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726503 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726508 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726514 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726522 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726527 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726533 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:19:44.726539 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:19:44.726545 kernel: NUMA: Failed to initialise from firmware
May 13 00:19:44.726551 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:19:44.726556 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
May 13 00:19:44.726562 kernel: Zone ranges:
May 13 00:19:44.726567 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:19:44.726574 kernel: DMA32 empty
May 13 00:19:44.726579 kernel: Normal empty
May 13 00:19:44.726585 kernel: Movable zone start for each node
May 13 00:19:44.726590 kernel: Early memory node ranges
May 13 00:19:44.726596 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 13 00:19:44.726601 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 13 00:19:44.726607 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 13 00:19:44.726612 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 13 00:19:44.726618 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 13 00:19:44.726623 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 13 00:19:44.726629 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 13 00:19:44.726634 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:19:44.726641 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:19:44.726647 kernel: psci: probing for conduit method from ACPI.
May 13 00:19:44.726652 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:19:44.726658 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:19:44.726664 kernel: psci: Trusted OS migration not required
May 13 00:19:44.726672 kernel: psci: SMC Calling Convention v1.1
May 13 00:19:44.726678 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:19:44.726685 kernel: ACPI: SRAT not present
May 13 00:19:44.726692 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 13 00:19:44.726698 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 13 00:19:44.726704 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:19:44.726710 kernel: Detected PIPT I-cache on CPU0
May 13 00:19:44.726716 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:19:44.726722 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:19:44.726727 kernel: CPU features: detected: Spectre-v4
May 13 00:19:44.726733 kernel: CPU features: detected: Spectre-BHB
May 13 00:19:44.726740 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:19:44.726746 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:19:44.726752 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:19:44.726758 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:19:44.726764 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:19:44.726770 kernel: Policy zone: DMA
May 13 00:19:44.726777 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:19:44.726783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:19:44.726789 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:19:44.726795 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:19:44.726801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:19:44.726809 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114952K reserved, 0K cma-reserved)
May 13 00:19:44.726831 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:19:44.726837 kernel: trace event string verifier disabled
May 13 00:19:44.726850 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:19:44.726856 kernel: rcu: RCU event tracing is enabled.
May 13 00:19:44.726863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:19:44.726869 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:19:44.726875 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:19:44.726881 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:19:44.726887 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:19:44.726893 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:19:44.726901 kernel: GICv3: 256 SPIs implemented
May 13 00:19:44.726907 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:19:44.726913 kernel: GICv3: Distributor has no Range Selector support
May 13 00:19:44.726919 kernel: Root IRQ handler: gic_handle_irq
May 13 00:19:44.726925 kernel: GICv3: 16 PPIs implemented
May 13 00:19:44.726930 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:19:44.726936 kernel: ACPI: SRAT not present
May 13 00:19:44.726942 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:19:44.726948 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:19:44.726955 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:19:44.726961 kernel: GICv3: using LPI property table @0x00000000400d0000
May 13 00:19:44.726967 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 13 00:19:44.726974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:19:44.726980 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:19:44.726987 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:19:44.726993 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:19:44.726999 kernel: arm-pv: using stolen time PV
May 13 00:19:44.727005 kernel: Console: colour dummy device 80x25
May 13 00:19:44.727011 kernel: ACPI: Core revision 20210730
May 13 00:19:44.727018 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:19:44.727024 kernel: pid_max: default: 32768 minimum: 301
May 13 00:19:44.727030 kernel: LSM: Security Framework initializing
May 13 00:19:44.727037 kernel: SELinux: Initializing.
May 13 00:19:44.727044 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:19:44.727050 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:19:44.727056 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:19:44.727062 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:19:44.727068 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:19:44.727074 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:19:44.727081 kernel: Remapping and enabling EFI services.
May 13 00:19:44.727087 kernel: smp: Bringing up secondary CPUs ...
May 13 00:19:44.727094 kernel: Detected PIPT I-cache on CPU1
May 13 00:19:44.727101 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:19:44.727107 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 13 00:19:44.727113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:19:44.727120 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:19:44.727140 kernel: Detected PIPT I-cache on CPU2
May 13 00:19:44.727147 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:19:44.727153 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 13 00:19:44.727160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:19:44.727166 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:19:44.727174 kernel: Detected PIPT I-cache on CPU3
May 13 00:19:44.727180 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:19:44.727189 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 13 00:19:44.727196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:19:44.727206 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:19:44.727216 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:19:44.727223 kernel: SMP: Total of 4 processors activated.
May 13 00:19:44.727232 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:19:44.727239 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:19:44.727245 kernel: CPU features: detected: Common not Private translations
May 13 00:19:44.727252 kernel: CPU features: detected: CRC32 instructions
May 13 00:19:44.727259 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:19:44.727267 kernel: CPU features: detected: LSE atomic instructions
May 13 00:19:44.727273 kernel: CPU features: detected: Privileged Access Never
May 13 00:19:44.727280 kernel: CPU features: detected: RAS Extension Support
May 13 00:19:44.727287 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:19:44.727293 kernel: CPU: All CPU(s) started at EL1
May 13 00:19:44.727302 kernel: alternatives: patching kernel code
May 13 00:19:44.727308 kernel: devtmpfs: initialized
May 13 00:19:44.727315 kernel: KASLR enabled
May 13 00:19:44.727321 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:19:44.727328 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:19:44.727335 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:19:44.727341 kernel: SMBIOS 3.0.0 present.
May 13 00:19:44.727348 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 13 00:19:44.727354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:19:44.727364 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:19:44.727371 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:19:44.727378 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:19:44.727385 kernel: audit: initializing netlink subsys (disabled)
May 13 00:19:44.727392 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 13 00:19:44.727398 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:19:44.727405 kernel: cpuidle: using governor menu
May 13 00:19:44.727413 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:19:44.727420 kernel: ASID allocator initialised with 32768 entries
May 13 00:19:44.727428 kernel: ACPI: bus type PCI registered
May 13 00:19:44.727434 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:19:44.727441 kernel: Serial: AMBA PL011 UART driver
May 13 00:19:44.727447 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:19:44.727456 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:19:44.727463 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:19:44.727469 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:19:44.727475 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:19:44.727482 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:19:44.727492 kernel: ACPI: Added _OSI(Module Device)
May 13 00:19:44.727498 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:19:44.727505 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:19:44.727511 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:19:44.727518 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:19:44.727526 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:19:44.727533 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:19:44.727539 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:19:44.727546 kernel: ACPI: Interpreter enabled
May 13 00:19:44.727554 kernel: ACPI: Using GIC for interrupt routing
May 13 00:19:44.727560 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:19:44.727567 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:19:44.727573 kernel: printk: console [ttyAMA0] enabled
May 13 00:19:44.727580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:19:44.727723 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:19:44.727791 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:19:44.727861 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:19:44.727921 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:19:44.727978 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:19:44.727986 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:19:44.727993 kernel: PCI host bridge to bus 0000:00
May 13 00:19:44.728055 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:19:44.728108 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:19:44.728180 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:19:44.728234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:19:44.728303 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:19:44.728370 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:19:44.728430 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:19:44.728489 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:19:44.728547 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:19:44.728609 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:19:44.728668 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:19:44.728726 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:19:44.728779 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:19:44.728830 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:19:44.728894 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:19:44.728903 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:19:44.728910 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:19:44.728918 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:19:44.728924 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:19:44.728931 kernel: iommu: Default domain type: Translated
May 13 00:19:44.728938 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:19:44.728944 kernel: vgaarb: loaded
May 13 00:19:44.728950 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:19:44.728957 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 13 00:19:44.728964 kernel: PTP clock support registered
May 13 00:19:44.728970 kernel: Registered efivars operations
May 13 00:19:44.728978 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:19:44.728984 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:19:44.728991 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:19:44.728998 kernel: pnp: PnP ACPI init
May 13 00:19:44.729063 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:19:44.729072 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:19:44.729079 kernel: NET: Registered PF_INET protocol family
May 13 00:19:44.729085 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:19:44.729093 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:19:44.729100 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:19:44.729127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:19:44.729134 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:19:44.729140 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:19:44.729147 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:19:44.729154 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:19:44.729161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:19:44.729168 kernel: PCI: CLS 0 bytes, default 64
May 13 00:19:44.729175 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:19:44.729182 kernel: kvm [1]: HYP mode not available
May 13 00:19:44.729188 kernel: Initialise system trusted keyrings
May 13 00:19:44.729195 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:19:44.729202 kernel: Key type asymmetric registered
May 13 00:19:44.729208 kernel: Asymmetric key parser 'x509' registered
May 13 00:19:44.729215 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:19:44.729221 kernel: io scheduler mq-deadline registered
May 13 00:19:44.729228 kernel: io scheduler kyber registered
May 13 00:19:44.729235 kernel: io scheduler bfq registered
May 13 00:19:44.729242 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:19:44.729249 kernel: ACPI: button: Power Button [PWRB]
May 13 00:19:44.729255 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:19:44.729316 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:19:44.729326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:19:44.729333 kernel: thunder_xcv, ver 1.0
May 13 00:19:44.729339 kernel: thunder_bgx, ver 1.0
May 13 00:19:44.729346 kernel: nicpf, ver 1.0
May 13 00:19:44.729354 kernel: nicvf, ver 1.0
May 13 00:19:44.729438 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:19:44.729495 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:19:44 UTC (1747095584)
May 13 00:19:44.729504 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:19:44.729511 kernel: NET: Registered PF_INET6 protocol family
May 13 00:19:44.729518 kernel: Segment Routing with IPv6
May 13 00:19:44.729524 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:19:44.729531 kernel: NET: Registered PF_PACKET protocol family
May 13 00:19:44.729539 kernel: Key type dns_resolver registered
May 13 00:19:44.729545 kernel: registered taskstats version 1
May 13 00:19:44.729552 kernel: Loading compiled-in X.509 certificates
May 13 00:19:44.729558 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: d291b704d59536a3c0ba96fd6f5a99459de8de99'
May 13 00:19:44.729565 kernel: Key type .fscrypt registered
May 13 00:19:44.729571 kernel: Key type fscrypt-provisioning registered
May 13 00:19:44.729578 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:19:44.729584 kernel: ima: Allocated hash algorithm: sha1
May 13 00:19:44.729591 kernel: ima: No architecture policies found
May 13 00:19:44.729598 kernel: clk: Disabling unused clocks
May 13 00:19:44.729605 kernel: Freeing unused kernel memory: 36480K
May 13 00:19:44.729611 kernel: Run /init as init process
May 13 00:19:44.729618 kernel: with arguments:
May 13 00:19:44.729624 kernel: /init
May 13 00:19:44.729631 kernel: with environment:
May 13 00:19:44.729637 kernel: HOME=/
May 13 00:19:44.729643 kernel: TERM=linux
May 13 00:19:44.729649 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:19:44.729659 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:19:44.729667 systemd[1]: Detected virtualization kvm.
May 13 00:19:44.729675 systemd[1]: Detected architecture arm64.
May 13 00:19:44.729682 systemd[1]: Running in initrd.
May 13 00:19:44.729689 systemd[1]: No hostname configured, using default hostname.
May 13 00:19:44.729695 systemd[1]: Hostname set to <localhost>.
May 13 00:19:44.729703 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:19:44.729711 systemd[1]: Queued start job for default target initrd.target.
May 13 00:19:44.729719 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:19:44.729726 systemd[1]: Reached target cryptsetup.target.
May 13 00:19:44.729732 systemd[1]: Reached target paths.target.
May 13 00:19:44.729739 systemd[1]: Reached target slices.target.
May 13 00:19:44.729746 systemd[1]: Reached target swap.target.
May 13 00:19:44.729753 systemd[1]: Reached target timers.target.
May 13 00:19:44.729760 systemd[1]: Listening on iscsid.socket.
May 13 00:19:44.729768 systemd[1]: Listening on iscsiuio.socket.
May 13 00:19:44.729775 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:19:44.729782 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:19:44.729789 systemd[1]: Listening on systemd-journald.socket.
May 13 00:19:44.729796 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:19:44.729803 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:19:44.729810 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:19:44.729817 systemd[1]: Reached target sockets.target.
May 13 00:19:44.729825 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:19:44.729832 systemd[1]: Finished network-cleanup.service.
May 13 00:19:44.729845 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:19:44.729853 systemd[1]: Starting systemd-journald.service...
May 13 00:19:44.729860 systemd[1]: Starting systemd-modules-load.service...
May 13 00:19:44.729867 systemd[1]: Starting systemd-resolved.service...
May 13 00:19:44.729874 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:19:44.729881 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:19:44.729888 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:19:44.729896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:19:44.729903 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:19:44.729911 kernel: audit: type=1130 audit(1747095584.729:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.729921 systemd-journald[290]: Journal started
May 13 00:19:44.729961 systemd-journald[290]: Runtime Journal (/run/log/journal/7f1102baee9a4ddd99b74ce110d00c30) is 6.0M, max 48.7M, 42.6M free.
May 13 00:19:44.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.722917 systemd-modules-load[291]: Inserted module 'overlay'
May 13 00:19:44.733282 systemd[1]: Started systemd-journald.service.
May 13 00:19:44.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.736328 kernel: audit: type=1130 audit(1747095584.733:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.736675 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:19:44.740653 kernel: audit: type=1130 audit(1747095584.737:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.741465 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:19:44.748593 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:19:44.752217 systemd-resolved[292]: Positive Trust Anchors:
May 13 00:19:44.752230 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:19:44.756400 kernel: Bridge firewalling registered
May 13 00:19:44.752258 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:19:44.769207 kernel: audit: type=1130 audit(1747095584.762:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.769229 kernel: SCSI subsystem initialized
May 13 00:19:44.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.754562 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 13 00:19:44.757136 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 13 00:19:44.761852 systemd[1]: Started systemd-resolved.service.
May 13 00:19:44.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.762778 systemd[1]: Reached target nss-lookup.target.
May 13 00:19:44.779473 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:19:44.779496 kernel: device-mapper: uevent: version 1.0.3
May 13 00:19:44.779505 kernel: audit: type=1130 audit(1747095584.771:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.779515 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:19:44.770119 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:19:44.772418 systemd[1]: Starting dracut-cmdline.service...
May 13 00:19:44.781355 dracut-cmdline[308]: dracut-dracut-053
May 13 00:19:44.783298 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 13 00:19:44.784080 systemd[1]: Finished systemd-modules-load.service.
May 13 00:19:44.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.788023 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:19:44.794902 kernel: audit: type=1130 audit(1747095584.784:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.785673 systemd[1]: Starting systemd-sysctl.service...
May 13 00:19:44.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.794755 systemd[1]: Finished systemd-sysctl.service.
May 13 00:19:44.799614 kernel: audit: type=1130 audit(1747095584.795:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.851151 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:19:44.864151 kernel: iscsi: registered transport (tcp)
May 13 00:19:44.878306 kernel: iscsi: registered transport (qla4xxx)
May 13 00:19:44.878320 kernel: QLogic iSCSI HBA Driver
May 13 00:19:44.912750 systemd[1]: Finished dracut-cmdline.service.
May 13 00:19:44.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.914387 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:19:44.917733 kernel: audit: type=1130 audit(1747095584.913:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:44.962162 kernel: raid6: neonx8 gen() 13713 MB/s
May 13 00:19:44.979147 kernel: raid6: neonx8 xor() 10754 MB/s
May 13 00:19:44.996147 kernel: raid6: neonx4 gen() 13520 MB/s
May 13 00:19:45.013169 kernel: raid6: neonx4 xor() 11140 MB/s
May 13 00:19:45.030155 kernel: raid6: neonx2 gen() 12931 MB/s
May 13 00:19:45.047166 kernel: raid6: neonx2 xor() 10356 MB/s
May 13 00:19:45.064162 kernel: raid6: neonx1 gen() 10554 MB/s
May 13 00:19:45.081161 kernel: raid6: neonx1 xor() 8761 MB/s
May 13 00:19:45.098164 kernel: raid6: int64x8 gen() 6246 MB/s
May 13 00:19:45.115153 kernel: raid6: int64x8 xor() 3518 MB/s
May 13 00:19:45.132152 kernel: raid6: int64x4 gen() 7091 MB/s
May 13 00:19:45.149156 kernel: raid6: int64x4 xor() 3799 MB/s
May 13 00:19:45.166157 kernel: raid6: int64x2 gen() 6041 MB/s
May 13 00:19:45.183155 kernel: raid6: int64x2 xor() 3267 MB/s
May 13 00:19:45.200155 kernel: raid6: int64x1 gen() 4968 MB/s
May 13 00:19:45.217259 kernel: raid6: int64x1 xor() 2603 MB/s
May 13 00:19:45.217269 kernel: raid6: using algorithm neonx8 gen() 13713 MB/s
May 13 00:19:45.217278 kernel: raid6: .... xor() 10754 MB/s, rmw enabled
May 13 00:19:45.218361 kernel: raid6: using neon recovery algorithm
May 13 00:19:45.229176 kernel: xor: measuring software checksum speed
May 13 00:19:45.229190 kernel: 8regs : 16904 MB/sec
May 13 00:19:45.230471 kernel: 32regs : 20262 MB/sec
May 13 00:19:45.230482 kernel: arm64_neon : 23229 MB/sec
May 13 00:19:45.230490 kernel: xor: using function: arm64_neon (23229 MB/sec)
May 13 00:19:45.284165 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 13 00:19:45.294271 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:19:45.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:45.298134 kernel: audit: type=1130 audit(1747095585.294:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:45.297000 audit: BPF prog-id=7 op=LOAD
May 13 00:19:45.297000 audit: BPF prog-id=8 op=LOAD
May 13 00:19:45.298492 systemd[1]: Starting systemd-udevd.service...
May 13 00:19:45.316075 systemd-udevd[491]: Using default interface naming scheme 'v252'.
May 13 00:19:45.319383 systemd[1]: Started systemd-udevd.service.
May 13 00:19:45.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:45.320905 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:19:45.332371 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
May 13 00:19:45.358419 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:19:45.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:45.359955 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:19:45.391953 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:19:45.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:45.432930 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:19:45.438113 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:19:45.438140 kernel: GPT:9289727 != 19775487
May 13 00:19:45.438150 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:19:45.438163 kernel: GPT:9289727 != 19775487
May 13 00:19:45.438171 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:19:45.438179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:19:45.453152 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (539)
May 13 00:19:45.454390 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:19:45.455461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:19:45.463503 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:19:45.466810 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:19:45.470943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:19:45.472761 systemd[1]: Starting disk-uuid.service...
May 13 00:19:45.478556 disk-uuid[563]: Primary Header is updated.
May 13 00:19:45.478556 disk-uuid[563]: Secondary Entries is updated.
May 13 00:19:45.478556 disk-uuid[563]: Secondary Header is updated.
May 13 00:19:45.481639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:19:46.497827 disk-uuid[564]: The operation has completed successfully.
May 13 00:19:46.499075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:19:46.522714 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:19:46.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.522811 systemd[1]: Finished disk-uuid.service.
May 13 00:19:46.524488 systemd[1]: Starting verity-setup.service...
May 13 00:19:46.539153 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:19:46.563825 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:19:46.565388 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:19:46.566203 systemd[1]: Finished verity-setup.service.
May 13 00:19:46.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.611159 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:19:46.611505 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:19:46.612344 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:19:46.613043 systemd[1]: Starting ignition-setup.service...
May 13 00:19:46.615381 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:19:46.623724 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:19:46.623763 kernel: BTRFS info (device vda6): using free space tree
May 13 00:19:46.623773 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:19:46.634421 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:19:46.640776 systemd[1]: Finished ignition-setup.service.
May 13 00:19:46.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.642543 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:19:46.696508 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:19:46.698680 systemd[1]: Starting systemd-networkd.service...
May 13 00:19:46.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.698000 audit: BPF prog-id=9 op=LOAD
May 13 00:19:46.725419 systemd-networkd[741]: lo: Link UP
May 13 00:19:46.725433 systemd-networkd[741]: lo: Gained carrier
May 13 00:19:46.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.725784 systemd-networkd[741]: Enumeration completed
May 13 00:19:46.725887 systemd[1]: Started systemd-networkd.service.
May 13 00:19:46.725986 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:19:46.731794 systemd-networkd[741]: eth0: Link UP
May 13 00:19:46.731798 systemd-networkd[741]: eth0: Gained carrier
May 13 00:19:46.732220 systemd[1]: Reached target network.target.
May 13 00:19:46.734410 systemd[1]: Starting iscsiuio.service...
May 13 00:19:46.743311 systemd[1]: Started iscsiuio.service.
May 13 00:19:46.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.745089 systemd[1]: Starting iscsid.service...
May 13 00:19:46.748423 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:19:46.748423 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 13 00:19:46.748423 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:19:46.748423 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:19:46.748423 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:19:46.748423 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:19:46.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.749802 ignition[658]: Ignition 2.14.0
May 13 00:19:46.751517 systemd[1]: Started iscsid.service.
May 13 00:19:46.749810 ignition[658]: Stage: fetch-offline
May 13 00:19:46.759107 systemd[1]: Starting dracut-initqueue.service...
May 13 00:19:46.749859 ignition[658]: no configs at "/usr/lib/ignition/base.d"
May 13 00:19:46.763202 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:19:46.749868 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:19:46.750013 ignition[658]: parsed url from cmdline: ""
May 13 00:19:46.750016 ignition[658]: no config URL provided
May 13 00:19:46.750022 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:19:46.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.769382 systemd[1]: Finished dracut-initqueue.service.
May 13 00:19:46.750029 ignition[658]: no config at "/usr/lib/ignition/user.ign"
May 13 00:19:46.770958 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:19:46.750050 ignition[658]: op(1): [started] loading QEMU firmware config module
May 13 00:19:46.772837 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:19:46.750056 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:19:46.774489 systemd[1]: Reached target remote-fs.target.
May 13 00:19:46.760766 ignition[658]: op(1): [finished] loading QEMU firmware config module
May 13 00:19:46.776796 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:19:46.776796 ignition[658]: parsing config with SHA512: 5a27efea017b245dd14c760179ae437cba90b43b6a5560d9f1c7474c93cfce72ae23f75e2fcb92583d0dbbc45fc018cfa8986d73fd75ef40f6221f4d46bab914
May 13 00:19:46.784646 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:19:46.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.788723 unknown[658]: fetched base config from "system"
May 13 00:19:46.788733 unknown[658]: fetched user config from "qemu"
May 13 00:19:46.789036 ignition[658]: fetch-offline: fetch-offline passed
May 13 00:19:46.789089 ignition[658]: Ignition finished successfully
May 13 00:19:46.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.790951 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:19:46.792162 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:19:46.792952 systemd[1]: Starting ignition-kargs.service...
May 13 00:19:46.801449 ignition[762]: Ignition 2.14.0
May 13 00:19:46.801459 ignition[762]: Stage: kargs
May 13 00:19:46.801556 ignition[762]: no configs at "/usr/lib/ignition/base.d"
May 13 00:19:46.801565 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:19:46.803878 systemd[1]: Finished ignition-kargs.service.
May 13 00:19:46.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.802218 ignition[762]: kargs: kargs passed
May 13 00:19:46.802262 ignition[762]: Ignition finished successfully
May 13 00:19:46.806281 systemd[1]: Starting ignition-disks.service...
May 13 00:19:46.812912 ignition[768]: Ignition 2.14.0
May 13 00:19:46.812923 ignition[768]: Stage: disks
May 13 00:19:46.813016 ignition[768]: no configs at "/usr/lib/ignition/base.d"
May 13 00:19:46.815027 systemd[1]: Finished ignition-disks.service.
May 13 00:19:46.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.813025 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:19:46.816687 systemd[1]: Reached target initrd-root-device.target.
May 13 00:19:46.814036 ignition[768]: disks: disks passed
May 13 00:19:46.818011 systemd[1]: Reached target local-fs-pre.target.
May 13 00:19:46.814081 ignition[768]: Ignition finished successfully
May 13 00:19:46.819661 systemd[1]: Reached target local-fs.target.
May 13 00:19:46.820998 systemd[1]: Reached target sysinit.target.
May 13 00:19:46.822156 systemd[1]: Reached target basic.target.
May 13 00:19:46.824377 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:19:46.835157 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 13 00:19:46.838043 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:19:46.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.840261 systemd[1]: Mounting sysroot.mount...
May 13 00:19:46.846939 systemd[1]: Mounted sysroot.mount.
May 13 00:19:46.848177 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:19:46.847713 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:19:46.849965 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:19:46.850883 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:19:46.850922 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:19:46.850945 systemd[1]: Reached target ignition-diskful.target.
May 13 00:19:46.852721 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:19:46.854618 systemd[1]: Starting initrd-setup-root.service...
May 13 00:19:46.858821 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:19:46.862593 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
May 13 00:19:46.866649 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:19:46.870039 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:19:46.896177 systemd[1]: Finished initrd-setup-root.service.
May 13 00:19:46.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.897780 systemd[1]: Starting ignition-mount.service...
May 13 00:19:46.899148 systemd[1]: Starting sysroot-boot.service...
May 13 00:19:46.902870 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:19:46.911354 ignition[829]: INFO : Ignition 2.14.0
May 13 00:19:46.911354 ignition[829]: INFO : Stage: mount
May 13 00:19:46.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.914195 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:19:46.914195 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:19:46.914195 ignition[829]: INFO : mount: mount passed
May 13 00:19:46.914195 ignition[829]: INFO : Ignition finished successfully
May 13 00:19:46.913205 systemd[1]: Finished ignition-mount.service.
May 13 00:19:46.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:46.917385 systemd[1]: Finished sysroot-boot.service.
May 13 00:19:47.573729 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:19:47.580146 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
May 13 00:19:47.582645 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:19:47.582700 kernel: BTRFS info (device vda6): using free space tree
May 13 00:19:47.582710 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:19:47.586466 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:19:47.588074 systemd[1]: Starting ignition-files.service...
May 13 00:19:47.602274 ignition[857]: INFO : Ignition 2.14.0
May 13 00:19:47.602274 ignition[857]: INFO : Stage: files
May 13 00:19:47.604217 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:19:47.604217 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:19:47.604217 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:19:47.608297 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:19:47.608297 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:19:47.614277 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:19:47.615803 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:19:47.617587 unknown[857]: wrote ssh authorized keys file for user: core
May 13 00:19:47.620905 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:19:47.620905 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 00:19:47.920304 systemd-networkd[741]: eth0: Gained IPv6LL
May 13 00:19:47.978627 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 13 00:19:48.347720 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:19:48.347720 ignition[857]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 13 00:19:48.351431 ignition[857]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:19:48.353610 ignition[857]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:19:48.353610 ignition[857]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 13 00:19:48.353610 ignition[857]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:19:48.353610 ignition[857]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:19:48.394001 ignition[857]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:19:48.394001 ignition[857]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:19:48.394001 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:19:48.394001 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:19:48.394001 ignition[857]: INFO : files: files passed
May 13 00:19:48.394001 ignition[857]: INFO : Ignition finished successfully
May 13 00:19:48.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:48.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:48.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:48.394226 systemd[1]: Finished ignition-files.service.
May 13 00:19:48.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:48.399728 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:19:48.401147 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:19:48.415524 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:19:48.401781 systemd[1]: Starting ignition-quench.service...
May 13 00:19:48.421528 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:19:48.405291 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:19:48.405367 systemd[1]: Finished ignition-quench.service.
May 13 00:19:48.409443 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:19:48.410620 systemd[1]: Reached target ignition-complete.target.
May 13 00:19:48.412671 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:19:48.428824 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:19:48.428915 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:19:48.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:48.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:19:48.430858 systemd[1]: Reached target initrd-fs.target.
May 13 00:19:48.432203 systemd[1]: Reached target initrd.target.
May 13 00:19:48.433575 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:19:48.434288 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:19:48.443991 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:19:48.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.445557 systemd[1]: Starting initrd-cleanup.service... May 13 00:19:48.453413 systemd[1]: Stopped target nss-lookup.target. May 13 00:19:48.454305 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:19:48.455826 systemd[1]: Stopped target timers.target. May 13 00:19:48.457259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:19:48.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.457365 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:19:48.458717 systemd[1]: Stopped target initrd.target. May 13 00:19:48.460214 systemd[1]: Stopped target basic.target. May 13 00:19:48.461665 systemd[1]: Stopped target ignition-complete.target. May 13 00:19:48.463035 systemd[1]: Stopped target ignition-diskful.target. May 13 00:19:48.464424 systemd[1]: Stopped target initrd-root-device.target. May 13 00:19:48.465965 systemd[1]: Stopped target remote-fs.target. May 13 00:19:48.467390 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:19:48.468890 systemd[1]: Stopped target sysinit.target. May 13 00:19:48.470218 systemd[1]: Stopped target local-fs.target. May 13 00:19:48.471617 systemd[1]: Stopped target local-fs-pre.target. May 13 00:19:48.473026 systemd[1]: Stopped target swap.target. May 13 00:19:48.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.474251 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:19:48.474367 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:19:48.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.475661 systemd[1]: Stopped target cryptsetup.target. May 13 00:19:48.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.476890 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:19:48.476988 systemd[1]: Stopped dracut-initqueue.service. May 13 00:19:48.478467 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:19:48.478562 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:19:48.479908 systemd[1]: Stopped target paths.target. May 13 00:19:48.481171 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:19:48.485167 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:19:48.486090 systemd[1]: Stopped target slices.target. May 13 00:19:48.487666 systemd[1]: Stopped target sockets.target. 
May 13 00:19:48.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.489025 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:19:48.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.489150 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:19:48.494490 iscsid[746]: iscsid shutting down. May 13 00:19:48.490593 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:19:48.490777 systemd[1]: Stopped ignition-files.service. May 13 00:19:48.492836 systemd[1]: Stopping ignition-mount.service... May 13 00:19:48.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.493721 systemd[1]: Stopping iscsid.service... May 13 00:19:48.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.495652 systemd[1]: Stopping sysroot-boot.service... May 13 00:19:48.501984 ignition[897]: INFO : Ignition 2.14.0 May 13 00:19:48.501984 ignition[897]: INFO : Stage: umount May 13 00:19:48.501984 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:19:48.501984 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:19:48.501984 ignition[897]: INFO : umount: umount passed May 13 00:19:48.501984 ignition[897]: INFO : Ignition finished successfully May 13 00:19:48.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.497005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:19:48.497183 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:19:48.498585 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:19:48.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.498683 systemd[1]: Stopped dracut-pre-trigger.service. 
May 13 00:19:48.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.501444 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:19:48.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.501538 systemd[1]: Stopped iscsid.service. May 13 00:19:48.503045 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:19:48.503112 systemd[1]: Closed iscsid.socket. May 13 00:19:48.504187 systemd[1]: Stopping iscsiuio.service... May 13 00:19:48.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.505616 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:19:48.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.505707 systemd[1]: Finished initrd-cleanup.service. May 13 00:19:48.506768 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:19:48.506873 systemd[1]: Stopped iscsiuio.service. May 13 00:19:48.544000 audit: BPF prog-id=6 op=UNLOAD May 13 00:19:48.508754 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:19:48.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.508854 systemd[1]: Stopped ignition-mount.service. May 13 00:19:48.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.511639 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:19:48.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.512574 systemd[1]: Stopped target network.target. May 13 00:19:48.517901 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:19:48.517939 systemd[1]: Closed iscsiuio.socket. May 13 00:19:48.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.519652 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:19:48.519695 systemd[1]: Stopped ignition-disks.service. May 13 00:19:48.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.521043 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:19:48.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:19:48.521082 systemd[1]: Stopped ignition-kargs.service. May 13 00:19:48.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.523207 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:19:48.523244 systemd[1]: Stopped ignition-setup.service. May 13 00:19:48.525249 systemd[1]: Stopping systemd-networkd.service... May 13 00:19:48.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.527001 systemd[1]: Stopping systemd-resolved.service... May 13 00:19:48.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.533204 systemd-networkd[741]: eth0: DHCPv6 lease lost May 13 00:19:48.565000 audit: BPF prog-id=9 op=UNLOAD May 13 00:19:48.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.536009 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:19:48.536116 systemd[1]: Stopped systemd-networkd.service. May 13 00:19:48.538192 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:19:48.538283 systemd[1]: Stopped systemd-resolved.service. May 13 00:19:48.539526 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:19:48.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.539554 systemd[1]: Closed systemd-networkd.socket. May 13 00:19:48.542513 systemd[1]: Stopping network-cleanup.service... May 13 00:19:48.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.543703 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:19:48.543759 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:19:48.545386 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:19:48.545424 systemd[1]: Stopped systemd-sysctl.service. May 13 00:19:48.547517 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:19:48.547558 systemd[1]: Stopped systemd-modules-load.service. 
May 13 00:19:48.548467 systemd[1]: Stopping systemd-udevd.service... May 13 00:19:48.552787 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:19:48.553302 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:19:48.553388 systemd[1]: Stopped sysroot-boot.service. May 13 00:19:48.554793 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:19:48.554856 systemd[1]: Stopped initrd-setup-root.service. May 13 00:19:48.556703 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:19:48.556832 systemd[1]: Stopped systemd-udevd.service. May 13 00:19:48.558117 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:19:48.558215 systemd[1]: Stopped network-cleanup.service. May 13 00:19:48.559465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:19:48.559499 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:19:48.560657 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:19:48.560688 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:19:48.562079 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:19:48.562139 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:19:48.563393 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:19:48.563433 systemd[1]: Stopped dracut-cmdline.service. May 13 00:19:48.564682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:19:48.564723 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:19:48.566711 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:19:48.568270 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:19:48.568327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 13 00:19:48.570501 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:19:48.570541 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:19:48.571346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:19:48.571385 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:19:48.606322 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 13 00:19:48.573482 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 00:19:48.573902 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:19:48.573981 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:19:48.575275 systemd[1]: Reached target initrd-switch-root.target. May 13 00:19:48.577256 systemd[1]: Starting initrd-switch-root.service... May 13 00:19:48.583516 systemd[1]: Switching root. May 13 00:19:48.611537 systemd-journald[290]: Journal stopped May 13 00:19:50.637897 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:19:50.637957 kernel: SELinux: Class anon_inode not defined in policy. 
May 13 00:19:50.637974 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:19:50.637984 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:19:50.637994 kernel: SELinux: policy capability open_perms=1 May 13 00:19:50.638003 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:19:50.638013 kernel: SELinux: policy capability always_check_network=0 May 13 00:19:50.638025 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:19:50.638035 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:19:50.638044 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:19:50.638054 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:19:50.638067 systemd[1]: Successfully loaded SELinux policy in 33.731ms. May 13 00:19:50.638083 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.942ms. May 13 00:19:50.638095 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:19:50.638107 systemd[1]: Detected virtualization kvm. May 13 00:19:50.638117 systemd[1]: Detected architecture arm64. May 13 00:19:50.638141 systemd[1]: Detected first boot. May 13 00:19:50.638153 systemd[1]: Initializing machine ID from VM UUID. May 13 00:19:50.638163 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:19:50.638173 systemd[1]: Populated /etc with preset unit settings. May 13 00:19:50.638184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:19:50.638196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:19:50.638208 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:19:50.638221 kernel: kauditd_printk_skb: 81 callbacks suppressed May 13 00:19:50.638231 kernel: audit: type=1334 audit(1747095590.492:85): prog-id=12 op=LOAD May 13 00:19:50.638241 kernel: audit: type=1334 audit(1747095590.492:86): prog-id=3 op=UNLOAD May 13 00:19:50.638251 kernel: audit: type=1334 audit(1747095590.494:87): prog-id=13 op=LOAD May 13 00:19:50.638260 kernel: audit: type=1334 audit(1747095590.495:88): prog-id=14 op=LOAD May 13 00:19:50.638271 kernel: audit: type=1334 audit(1747095590.495:89): prog-id=4 op=UNLOAD May 13 00:19:50.638280 kernel: audit: type=1334 audit(1747095590.495:90): prog-id=5 op=UNLOAD May 13 00:19:50.638290 kernel: audit: type=1334 audit(1747095590.498:91): prog-id=15 op=LOAD May 13 00:19:50.638301 kernel: audit: type=1334 audit(1747095590.498:92): prog-id=12 op=UNLOAD May 13 00:19:50.638311 kernel: audit: type=1334 audit(1747095590.499:93): prog-id=16 op=LOAD May 13 00:19:50.638322 kernel: audit: type=1334 audit(1747095590.500:94): prog-id=17 op=LOAD May 13 00:19:50.638333 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:19:50.638344 systemd[1]: Stopped initrd-switch-root.service. 
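The policy capabilities printed above are also exported at runtime through selinuxfs, so they can be read back on the booted system. A minimal check, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    # each policy capability from the log is a file containing 0 or 1
    cat /sys/fs/selinux/policy_capabilities/network_peer_controls   # 1
    cat /sys/fs/selinux/policy_capabilities/open_perms              # 1
    cat /sys/fs/selinux/policy_capabilities/always_check_network    # 0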
May 13 00:19:50.638354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:19:50.638365 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:19:50.638376 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:19:50.638387 systemd[1]: Created slice system-getty.slice. May 13 00:19:50.638397 systemd[1]: Created slice system-modprobe.slice. May 13 00:19:50.638409 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:19:50.638421 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:19:50.638432 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:19:50.638442 systemd[1]: Created slice user.slice. May 13 00:19:50.638452 systemd[1]: Started systemd-ask-password-console.path. May 13 00:19:50.638463 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:19:50.638473 systemd[1]: Set up automount boot.automount. May 13 00:19:50.638485 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:19:50.638496 systemd[1]: Stopped target initrd-switch-root.target. May 13 00:19:50.638507 systemd[1]: Stopped target initrd-fs.target. May 13 00:19:50.638517 systemd[1]: Stopped target initrd-root-fs.target. May 13 00:19:50.638527 systemd[1]: Reached target integritysetup.target. May 13 00:19:50.638538 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:19:50.638550 systemd[1]: Reached target remote-fs.target. May 13 00:19:50.638560 systemd[1]: Reached target slices.target. May 13 00:19:50.638571 systemd[1]: Reached target swap.target. May 13 00:19:50.638581 systemd[1]: Reached target torcx.target. May 13 00:19:50.638592 systemd[1]: Reached target veritysetup.target. May 13 00:19:50.638602 systemd[1]: Listening on systemd-coredump.socket. May 13 00:19:50.638613 systemd[1]: Listening on systemd-initctl.socket. May 13 00:19:50.638627 systemd[1]: Listening on systemd-networkd.socket. May 13 00:19:50.638638 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:19:50.638649 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:19:50.638661 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:19:50.638671 systemd[1]: Mounting dev-hugepages.mount... May 13 00:19:50.638682 systemd[1]: Mounting dev-mqueue.mount... May 13 00:19:50.638692 systemd[1]: Mounting media.mount... May 13 00:19:50.638702 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:19:50.638713 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:19:50.638724 systemd[1]: Mounting tmp.mount... May 13 00:19:50.638734 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:19:50.638745 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:19:50.638757 systemd[1]: Starting kmod-static-nodes.service... May 13 00:19:50.638767 systemd[1]: Starting modprobe@configfs.service... May 13 00:19:50.638778 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:19:50.638789 systemd[1]: Starting modprobe@drm.service... May 13 00:19:50.638799 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:19:50.638821 systemd[1]: Starting modprobe@fuse.service... May 13 00:19:50.638833 systemd[1]: Starting modprobe@loop.service... May 13 00:19:50.638844 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:19:50.638855 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:19:50.638867 systemd[1]: Stopped systemd-fsck-root.service. 
May 13 00:19:50.638877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:19:50.638888 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:19:50.638901 systemd[1]: Stopped systemd-journald.service. May 13 00:19:50.638912 systemd[1]: Starting systemd-journald.service... May 13 00:19:50.638922 systemd[1]: Starting systemd-modules-load.service... May 13 00:19:50.638933 kernel: fuse: init (API version 7.34) May 13 00:19:50.638943 systemd[1]: Starting systemd-network-generator.service... May 13 00:19:50.638953 kernel: loop: module loaded May 13 00:19:50.638965 systemd[1]: Starting systemd-remount-fs.service... May 13 00:19:50.638975 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:19:50.638986 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:19:50.638996 systemd[1]: Stopped verity-setup.service. May 13 00:19:50.639007 systemd[1]: Mounted dev-hugepages.mount. May 13 00:19:50.639017 systemd[1]: Mounted dev-mqueue.mount. May 13 00:19:50.639028 systemd[1]: Mounted media.mount. May 13 00:19:50.639038 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:19:50.639048 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:19:50.639060 systemd[1]: Mounted tmp.mount. May 13 00:19:50.639071 systemd[1]: Finished kmod-static-nodes.service. May 13 00:19:50.639082 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:19:50.639092 systemd[1]: Finished modprobe@configfs.service. May 13 00:19:50.639111 systemd-journald[988]: Journal started May 13 00:19:50.639340 systemd-journald[988]: Runtime Journal (/run/log/journal/7f1102baee9a4ddd99b74ce110d00c30) is 6.0M, max 48.7M, 42.6M free. May 13 00:19:48.672000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:19:48.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:19:48.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:19:48.739000 audit: BPF prog-id=10 op=LOAD May 13 00:19:48.739000 audit: BPF prog-id=10 op=UNLOAD May 13 00:19:48.739000 audit: BPF prog-id=11 op=LOAD May 13 00:19:48.739000 audit: BPF prog-id=11 op=UNLOAD May 13 00:19:48.777000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:19:48.777000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:19:48.777000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:19:48.778000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:19:48.778000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:19:48.778000 audit: CWD cwd="/" May 13 00:19:48.778000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:19:48.778000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:19:48.778000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:19:50.492000 audit: BPF prog-id=12 op=LOAD May 13 00:19:50.492000 audit: BPF prog-id=3 op=UNLOAD May 13 00:19:50.494000 audit: BPF prog-id=13 op=LOAD May 13 00:19:50.495000 audit: BPF prog-id=14 op=LOAD May 13 00:19:50.495000 audit: BPF prog-id=4 op=UNLOAD May 13 00:19:50.495000 audit: BPF prog-id=5 op=UNLOAD May 13 00:19:50.498000 audit: BPF prog-id=15 op=LOAD May 13 00:19:50.498000 audit: BPF prog-id=12 op=UNLOAD May 13 00:19:50.499000 audit: BPF prog-id=16 op=LOAD May 13 00:19:50.500000 audit: BPF prog-id=17 op=LOAD May 13 00:19:50.500000 audit: BPF prog-id=13 op=UNLOAD May 13 00:19:50.500000 audit: BPF prog-id=14 op=UNLOAD May 13 00:19:50.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.513000 audit: BPF prog-id=15 op=UNLOAD May 13 00:19:50.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:19:50.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.604000 audit: BPF prog-id=18 op=LOAD May 13 00:19:50.604000 audit: BPF prog-id=19 op=LOAD May 13 00:19:50.604000 audit: BPF prog-id=20 op=LOAD May 13 00:19:50.604000 audit: BPF prog-id=16 op=UNLOAD May 13 00:19:50.604000 audit: BPF prog-id=17 op=UNLOAD May 13 00:19:50.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.634000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:19:50.634000 audit[988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe6d99b90 a2=4000 a3=1 items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:19:50.634000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:19:50.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:48.775750 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:19:50.491237 systemd[1]: Queued start job for default target multi-user.target. May 13 00:19:50.642883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:19:50.642904 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:19:48.776039 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:19:50.491249 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:19:48.776058 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:19:50.501381 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 13 00:19:48.776089 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:19:48.776098 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:19:48.776140 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:19:48.776153 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:19:48.776341 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:19:48.776373 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:19:48.776385 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:19:48.777080 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:19:48.777115 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 00:19:48.777153 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:19:48.777167 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:19:48.777184 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:19:48.777197 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:19:50.205013 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:19:50.205326 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:19:50.205431 
/usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:19:50.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.205613 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:19:50.205670 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 00:19:50.205737 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:19:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:19:50.646467 systemd[1]: Started systemd-journald.service. May 13 00:19:50.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.646575 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:19:50.646730 systemd[1]: Finished modprobe@drm.service. May 13 00:19:50.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.647865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:19:50.648011 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:19:50.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.649155 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:19:50.649293 systemd[1]: Finished modprobe@fuse.service. 
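The torcx-generator messages interleaved above show the vendor profile being resolved to the docker archive tagged com.coreos.cl and unpacked under /run/torcx. The profile file it reads (/usr/share/torcx/profiles/vendor.json) is a small manifest of roughly this form:

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }

The TORCX_LOWER_PROFILES="vendor" / TORCX_UPPER_PROFILE="" pair in the sealed state above corresponds to this vendor profile plus the absent user profile (/etc/torcx/next-profile, reported missing earlier).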
May 13 00:19:50.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.650306 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:19:50.650442 systemd[1]: Finished modprobe@loop.service. May 13 00:19:50.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.651599 systemd[1]: Finished systemd-modules-load.service. May 13 00:19:50.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.653048 systemd[1]: Finished systemd-network-generator.service. May 13 00:19:50.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.654294 systemd[1]: Finished systemd-remount-fs.service. May 13 00:19:50.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.655632 systemd[1]: Reached target network-pre.target. May 13 00:19:50.657510 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:19:50.659444 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:19:50.660222 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:19:50.662510 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:19:50.664560 systemd[1]: Starting systemd-journal-flush.service... May 13 00:19:50.665492 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:19:50.666639 systemd[1]: Starting systemd-random-seed.service... May 13 00:19:50.667546 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:19:50.668565 systemd[1]: Starting systemd-sysctl.service... May 13 00:19:50.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.673546 systemd-journald[988]: Time spent on flushing to /var/log/journal/7f1102baee9a4ddd99b74ce110d00c30 is 15.525ms for 980 entries. May 13 00:19:50.673546 systemd-journald[988]: System Journal (/var/log/journal/7f1102baee9a4ddd99b74ce110d00c30) is 8.0M, max 195.6M, 187.6M free. 
May 13 00:19:50.703016 systemd-journald[988]: Received client request to flush runtime journal. May 13 00:19:50.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.671515 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:19:50.672649 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:19:50.673731 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:19:50.703701 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:19:50.675730 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:19:50.677695 systemd[1]: Starting systemd-sysusers.service... May 13 00:19:50.679604 systemd[1]: Starting systemd-udev-settle.service... May 13 00:19:50.681890 systemd[1]: Finished systemd-random-seed.service. May 13 00:19:50.683041 systemd[1]: Reached target first-boot-complete.target. May 13 00:19:50.695275 systemd[1]: Finished systemd-sysctl.service. May 13 00:19:50.698174 systemd[1]: Finished systemd-sysusers.service. May 13 00:19:50.700082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:19:50.704053 systemd[1]: Finished systemd-journal-flush.service. May 13 00:19:50.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:50.719481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:19:50.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.061359 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:19:51.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.062000 audit: BPF prog-id=21 op=LOAD May 13 00:19:51.062000 audit: BPF prog-id=22 op=LOAD May 13 00:19:51.062000 audit: BPF prog-id=7 op=UNLOAD May 13 00:19:51.062000 audit: BPF prog-id=8 op=UNLOAD May 13 00:19:51.063725 systemd[1]: Starting systemd-udevd.service... May 13 00:19:51.079646 systemd-udevd[1036]: Using default interface naming scheme 'v252'. May 13 00:19:51.092413 systemd[1]: Started systemd-udevd.service. 
May 13 00:19:51.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.101000 audit: BPF prog-id=23 op=LOAD May 13 00:19:51.102361 systemd[1]: Starting systemd-networkd.service... May 13 00:19:51.113000 audit: BPF prog-id=24 op=LOAD May 13 00:19:51.113000 audit: BPF prog-id=25 op=LOAD May 13 00:19:51.113000 audit: BPF prog-id=26 op=LOAD May 13 00:19:51.114838 systemd[1]: Starting systemd-userdbd.service... May 13 00:19:51.129892 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 13 00:19:51.150818 systemd[1]: Started systemd-userdbd.service. May 13 00:19:51.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.162326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:19:51.209547 systemd[1]: Finished systemd-udev-settle.service. May 13 00:19:51.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.211942 systemd[1]: Starting lvm2-activation-early.service... May 13 00:19:51.214208 systemd-networkd[1056]: lo: Link UP May 13 00:19:51.214216 systemd-networkd[1056]: lo: Gained carrier May 13 00:19:51.214538 systemd-networkd[1056]: Enumeration completed May 13 00:19:51.214632 systemd[1]: Started systemd-networkd.service. May 13 00:19:51.214638 systemd-networkd[1056]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:19:51.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.216197 systemd-networkd[1056]: eth0: Link UP May 13 00:19:51.216204 systemd-networkd[1056]: eth0: Gained carrier May 13 00:19:51.222178 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:19:51.243240 systemd-networkd[1056]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:19:51.254030 systemd[1]: Finished lvm2-activation-early.service. May 13 00:19:51.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.255209 systemd[1]: Reached target cryptsetup.target. May 13 00:19:51.257396 systemd[1]: Starting lvm2-activation.service... May 13 00:19:51.261233 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:19:51.294080 systemd[1]: Finished lvm2-activation.service. May 13 00:19:51.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.295154 systemd[1]: Reached target local-fs-pre.target. May 13 00:19:51.296017 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
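eth0 is configured here by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network. Paraphrased (not a verbatim copy of the shipped file), that policy amounts to:

    [Match]
    Name=*

    [Network]
    DHCP=yes

which is consistent with the DHCPv4 lease for 10.0.0.39/16 via gateway 10.0.0.1 acquired above; a more specific .network file or drop-in under /etc/systemd/network would take precedence over it.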
May 13 00:19:51.296053 systemd[1]: Reached target local-fs.target. May 13 00:19:51.296876 systemd[1]: Reached target machines.target. May 13 00:19:51.299009 systemd[1]: Starting ldconfig.service... May 13 00:19:51.300152 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:19:51.300203 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:19:51.301310 systemd[1]: Starting systemd-boot-update.service... May 13 00:19:51.303115 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:19:51.305511 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:19:51.307563 systemd[1]: Starting systemd-sysext.service... May 13 00:19:51.309304 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) May 13 00:19:51.310359 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:19:51.322432 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:19:51.324103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:19:51.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.326970 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:19:51.327301 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:19:51.374302 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:19:51.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.384149 kernel: loop0: detected capacity change from 0 to 201592 May 13 00:19:51.387026 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) May 13 00:19:51.387026 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters May 13 00:19:51.389589 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:19:51.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.399139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:19:51.419150 kernel: loop1: detected capacity change from 0 to 201592 May 13 00:19:51.423570 (sd-sysext)[1085]: Using extensions 'kubernetes'. May 13 00:19:51.423984 (sd-sysext)[1085]: Merged extensions into '/usr'. May 13 00:19:51.443372 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:19:51.444719 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:19:51.446949 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:19:51.449305 systemd[1]: Starting modprobe@loop.service... May 13 00:19:51.450447 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:19:51.450576 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
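(sd-sysext) only merges an image whose extension-release metadata matches the host, so the kubernetes image fetched during the Ignition files stage must carry a file along these lines (schematic; the exact values depend on how the image was built):

    # /usr/lib/extension-release.d/extension-release.kubernetes (inside the image)
    ID=flatcar
    SYSEXT_LEVEL=1.0

An image built with ID=_any would instead match any host OS, which some prebuilt sysext images use to stay distribution-neutral.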
May 13 00:19:51.451747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:19:51.451899 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:19:51.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.453386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:19:51.453505 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:19:51.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.454962 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:19:51.455066 systemd[1]: Finished modprobe@loop.service. May 13 00:19:51.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.456460 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:19:51.456561 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:19:51.475932 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:19:51.479694 systemd[1]: Finished ldconfig.service. May 13 00:19:51.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.623704 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:19:51.625544 systemd[1]: Mounting boot.mount... May 13 00:19:51.627350 systemd[1]: Mounting usr-share-oem.mount... May 13 00:19:51.633696 systemd[1]: Mounted boot.mount. May 13 00:19:51.634638 systemd[1]: Mounted usr-share-oem.mount. May 13 00:19:51.636545 systemd[1]: Finished systemd-sysext.service. May 13 00:19:51.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.638629 systemd[1]: Starting ensure-sysext.service... May 13 00:19:51.640614 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:19:51.643636 systemd[1]: Finished systemd-boot-update.service. 
May 13 00:19:51.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.646097 systemd[1]: Reloading. May 13 00:19:51.650383 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:19:51.651984 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:19:51.653488 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:19:51.679517 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-13T00:19:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:19:51.679547 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-13T00:19:51Z" level=info msg="torcx already run" May 13 00:19:51.743579 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:19:51.743600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:19:51.758548 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:19:51.801000 audit: BPF prog-id=27 op=LOAD May 13 00:19:51.801000 audit: BPF prog-id=23 op=UNLOAD May 13 00:19:51.803000 audit: BPF prog-id=28 op=LOAD May 13 00:19:51.803000 audit: BPF prog-id=24 op=UNLOAD May 13 00:19:51.803000 audit: BPF prog-id=29 op=LOAD May 13 00:19:51.803000 audit: BPF prog-id=30 op=LOAD May 13 00:19:51.803000 audit: BPF prog-id=25 op=UNLOAD May 13 00:19:51.803000 audit: BPF prog-id=26 op=UNLOAD May 13 00:19:51.804000 audit: BPF prog-id=31 op=LOAD May 13 00:19:51.804000 audit: BPF prog-id=32 op=LOAD May 13 00:19:51.804000 audit: BPF prog-id=21 op=UNLOAD May 13 00:19:51.804000 audit: BPF prog-id=22 op=UNLOAD May 13 00:19:51.804000 audit: BPF prog-id=33 op=LOAD May 13 00:19:51.804000 audit: BPF prog-id=18 op=UNLOAD May 13 00:19:51.804000 audit: BPF prog-id=34 op=LOAD May 13 00:19:51.804000 audit: BPF prog-id=35 op=LOAD May 13 00:19:51.804000 audit: BPF prog-id=19 op=UNLOAD May 13 00:19:51.804000 audit: BPF prog-id=20 op=UNLOAD May 13 00:19:51.810838 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:19:51.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.815357 systemd[1]: Starting audit-rules.service... May 13 00:19:51.817321 systemd[1]: Starting clean-ca-certificates.service... May 13 00:19:51.819882 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:19:51.823000 audit: BPF prog-id=36 op=LOAD May 13 00:19:51.824169 systemd[1]: Starting systemd-resolved.service... 
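The burst of `audit: BPF prog-id=N op=LOAD` / `op=UNLOAD` records above corresponds to systemd re-attaching its BPF programs across the daemon reload. A small sketch that tallies these records from a saved log; the regex matches only the exact record shape seen here and ignores every other audit type:

```python
# Tally BPF program LOAD/UNLOAD audit records from a saved journal/console log.
# Matches only the "audit: BPF prog-id=<n> op=<OP>" shape seen in this boot log.
import re
import sys
from collections import Counter

BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def tally(stream):
    ops = Counter()
    for line in stream:
        for _prog_id, op in BPF_RE.findall(line):
            ops[op] += 1
    return ops

if __name__ == "__main__":
    counts = tally(sys.stdin)
    print(f"loaded={counts['LOAD']} unloaded={counts['UNLOAD']}")
```

Feeding this boot's log through the script would show the reload replacing each unloaded program with a freshly loaded one.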
May 13 00:19:51.826000 audit: BPF prog-id=37 op=LOAD May 13 00:19:51.828077 systemd[1]: Starting systemd-timesyncd.service... May 13 00:19:51.830010 systemd[1]: Starting systemd-update-utmp.service... May 13 00:19:51.831553 systemd[1]: Finished clean-ca-certificates.service. May 13 00:19:51.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.834000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:19:51.838830 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:19:51.840417 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:19:51.842447 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:19:51.844829 systemd[1]: Starting modprobe@loop.service... May 13 00:19:51.845619 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:19:51.845762 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:19:51.845908 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:19:51.846875 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:19:51.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.848424 systemd[1]: Finished systemd-update-utmp.service. May 13 00:19:51.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.849744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:19:51.849868 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:19:51.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.851095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:19:51.851305 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:19:51.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:19:51.852585 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:19:51.852695 systemd[1]: Finished modprobe@loop.service. May 13 00:19:51.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.856999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:19:51.858427 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:19:51.860316 systemd[1]: Starting modprobe@drm.service... May 13 00:19:51.862206 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:19:51.864222 systemd[1]: Starting modprobe@loop.service... May 13 00:19:51.865049 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:19:51.865235 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:19:51.866474 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:19:51.868813 systemd[1]: Starting systemd-update-done.service... May 13 00:19:51.869742 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:19:51.870838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:19:51.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.870969 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:19:51.872246 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:19:51.872361 systemd[1]: Finished modprobe@drm.service. May 13 00:19:51.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:19:51.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.873611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:19:51.873728 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:19:51.875002 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:19:51.875107 systemd[1]: Finished modprobe@loop.service. May 13 00:19:51.876656 systemd[1]: Finished systemd-update-done.service. May 13 00:19:51.878514 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:19:51.878555 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:19:51.878822 systemd[1]: Finished ensure-sysext.service. May 13 00:19:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:19:51.881000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:19:51.881000 audit[1180]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffceea7360 a2=420 a3=0 items=0 ppid=1151 pid=1180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:19:51.881000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:19:51.881790 augenrules[1180]: No rules May 13 00:19:51.882685 systemd[1]: Finished audit-rules.service. May 13 00:19:51.885447 systemd-resolved[1157]: Positive Trust Anchors: May 13 00:19:51.885665 systemd-resolved[1157]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:19:51.885739 systemd-resolved[1157]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:19:51.889409 systemd[1]: Started systemd-timesyncd.service. May 13 00:19:51.890478 systemd-timesyncd[1159]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:19:51.890530 systemd-timesyncd[1159]: Initial clock synchronization to Tue 2025-05-13 00:19:52.019187 UTC. May 13 00:19:51.890869 systemd[1]: Reached target time-set.target. May 13 00:19:51.897357 systemd-resolved[1157]: Defaulting to hostname 'linux'. 
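systemd-timesyncd contacts 10.0.0.1:123 above and steps the clock by roughly 0.13 s. For illustration, a minimal SNTP query in the RFC 4330 shape, assuming the same server is reachable over UDP (48-byte client packet, first byte 0x23 for LI=0/VN=4/Mode=client, transmit timestamp at byte offset 40, NTP-to-Unix epoch offset 2208988800 s); this is a one-shot probe, not a replacement for timesyncd's filtering and slewing:

```python
# Minimal SNTP client sketch: send a 48-byte request and read the server's
# transmit timestamp. Server address taken from the log; everything else is
# the bare protocol shape.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server="10.0.0.1", port=123, timeout=2.0):
    packet = bytearray(48)
    packet[0] = 0x23  # LI=0, VN=4, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(bytes(packet), (server, port))
        reply, _ = sock.recvfrom(48)
    secs, frac = struct.unpack("!II", reply[40:48])  # transmit timestamp
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == "__main__":
    server_now = sntp_time()
    print(f"offset vs local clock: {server_now - time.time():+.6f} s")
```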
May 13 00:19:51.899012 systemd[1]: Started systemd-resolved.service. May 13 00:19:51.899987 systemd[1]: Reached target network.target. May 13 00:19:51.900785 systemd[1]: Reached target nss-lookup.target. May 13 00:19:51.901606 systemd[1]: Reached target sysinit.target. May 13 00:19:51.902456 systemd[1]: Started motdgen.path. May 13 00:19:51.903195 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:19:51.904415 systemd[1]: Started logrotate.timer. May 13 00:19:51.905247 systemd[1]: Started mdadm.timer. May 13 00:19:51.905919 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:19:51.906789 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:19:51.906831 systemd[1]: Reached target paths.target. May 13 00:19:51.907627 systemd[1]: Reached target timers.target. May 13 00:19:51.908746 systemd[1]: Listening on dbus.socket. May 13 00:19:51.910582 systemd[1]: Starting docker.socket... May 13 00:19:51.913695 systemd[1]: Listening on sshd.socket. May 13 00:19:51.914728 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:19:51.915184 systemd[1]: Listening on docker.socket. May 13 00:19:51.916137 systemd[1]: Reached target sockets.target. May 13 00:19:51.916951 systemd[1]: Reached target basic.target. May 13 00:19:51.917803 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:19:51.917837 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:19:51.918833 systemd[1]: Starting containerd.service... May 13 00:19:51.920543 systemd[1]: Starting dbus.service... May 13 00:19:51.922237 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:19:51.924252 systemd[1]: Starting extend-filesystems.service... May 13 00:19:51.925199 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:19:51.926518 systemd[1]: Starting motdgen.service... May 13 00:19:51.928777 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:19:51.932711 jq[1190]: false May 13 00:19:51.931258 systemd[1]: Starting sshd-keygen.service... May 13 00:19:51.934542 systemd[1]: Starting systemd-logind.service... May 13 00:19:51.935434 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:19:51.935518 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:19:51.936075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:19:51.937040 systemd[1]: Starting update-engine.service... May 13 00:19:51.938882 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:19:51.943790 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:19:51.944052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:19:51.944362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 00:19:51.944502 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:19:51.946086 jq[1204]: true May 13 00:19:51.953501 jq[1211]: true May 13 00:19:51.962689 extend-filesystems[1191]: Found loop1 May 13 00:19:51.962713 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:19:51.962896 systemd[1]: Finished motdgen.service. May 13 00:19:51.963878 extend-filesystems[1191]: Found vda May 13 00:19:51.965023 extend-filesystems[1191]: Found vda1 May 13 00:19:51.965891 extend-filesystems[1191]: Found vda2 May 13 00:19:51.966682 extend-filesystems[1191]: Found vda3 May 13 00:19:51.967548 extend-filesystems[1191]: Found usr May 13 00:19:51.968377 extend-filesystems[1191]: Found vda4 May 13 00:19:51.969253 extend-filesystems[1191]: Found vda6 May 13 00:19:51.970256 extend-filesystems[1191]: Found vda7 May 13 00:19:51.970256 extend-filesystems[1191]: Found vda9 May 13 00:19:51.970256 extend-filesystems[1191]: Checking size of /dev/vda9 May 13 00:19:51.990319 systemd-logind[1199]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:19:51.990350 dbus-daemon[1189]: [system] SELinux support is enabled May 13 00:19:51.990622 systemd[1]: Started dbus.service. May 13 00:19:51.991517 systemd-logind[1199]: New seat seat0. May 13 00:19:51.993190 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:19:51.993217 systemd[1]: Reached target system-config.target. May 13 00:19:51.994435 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:19:51.994455 systemd[1]: Reached target user-config.target. May 13 00:19:51.995903 systemd[1]: Started systemd-logind.service. May 13 00:19:51.996002 dbus-daemon[1189]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 00:19:52.003117 extend-filesystems[1191]: Resized partition /dev/vda9 May 13 00:19:52.008169 extend-filesystems[1237]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:19:52.021172 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:19:52.026417 bash[1233]: Updated "/home/core/.ssh/authorized_keys" May 13 00:19:52.028714 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:19:52.047346 update_engine[1203]: I0513 00:19:52.046873 1203 main.cc:92] Flatcar Update Engine starting May 13 00:19:52.051063 update_engine[1203]: I0513 00:19:52.051036 1203 update_check_scheduler.cc:74] Next update check in 11m20s May 13 00:19:52.051599 systemd[1]: Started update-engine.service. May 13 00:19:52.056027 systemd[1]: Started locksmithd.service. May 13 00:19:52.057177 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:19:52.072706 extend-filesystems[1237]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:19:52.072706 extend-filesystems[1237]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:19:52.072706 extend-filesystems[1237]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:19:52.077723 extend-filesystems[1191]: Resized filesystem in /dev/vda9 May 13 00:19:52.073527 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:19:52.078914 env[1210]: time="2025-05-13T00:19:52.078709615Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:19:52.073704 systemd[1]: Finished extend-filesystems.service. 
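The resize2fs messages above record an online grow of /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. Translating those block counts into sizes (figures taken directly from the log):

```python
# Sanity-check the resize2fs figures from the log: 4 KiB blocks before/after.
BLOCK = 4096
before = 553472 * BLOCK
after = 1864699 * BLOCK
print(f"before: {before / 2**30:.2f} GiB")   # ~2.11 GiB
print(f"after:  {after / 2**30:.2f} GiB")    # ~7.11 GiB
print(f"grown by {(after - before) / 2**30:.2f} GiB")
```

That is, the root filesystem was grown in place by about 5 GiB while mounted read-write.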
May 13 00:19:52.097954 env[1210]: time="2025-05-13T00:19:52.097894500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:19:52.098079 env[1210]: time="2025-05-13T00:19:52.098057685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:19:52.099893 env[1210]: time="2025-05-13T00:19:52.099771113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:19:52.099893 env[1210]: time="2025-05-13T00:19:52.099804847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:19:52.100045 env[1210]: time="2025-05-13T00:19:52.100025097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:19:52.100221 env[1210]: time="2025-05-13T00:19:52.100047004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:19:52.100221 env[1210]: time="2025-05-13T00:19:52.100061473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:19:52.100221 env[1210]: time="2025-05-13T00:19:52.100070943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:19:52.100221 env[1210]: time="2025-05-13T00:19:52.100161214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:19:52.100490 env[1210]: time="2025-05-13T00:19:52.100447469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:19:52.100630 env[1210]: time="2025-05-13T00:19:52.100590820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:19:52.100630 env[1210]: time="2025-05-13T00:19:52.100611549Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:19:52.100787 env[1210]: time="2025-05-13T00:19:52.100668288Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:19:52.100787 env[1210]: time="2025-05-13T00:19:52.100680806Z" level=info msg="metadata content store policy set" policy=shared May 13 00:19:52.103730 env[1210]: time="2025-05-13T00:19:52.103699921Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:19:52.103730 env[1210]: time="2025-05-13T00:19:52.103732070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:19:52.103829 env[1210]: time="2025-05-13T00:19:52.103745523Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 May 13 00:19:52.103829 env[1210]: time="2025-05-13T00:19:52.103782387Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:19:52.103829 env[1210]: time="2025-05-13T00:19:52.103797101Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:19:52.103829 env[1210]: time="2025-05-13T00:19:52.103813846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:19:52.103928 env[1210]: time="2025-05-13T00:19:52.103826445Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:19:52.104258 env[1210]: time="2025-05-13T00:19:52.104237031Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:19:52.104292 env[1210]: time="2025-05-13T00:19:52.104268083Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:19:52.104292 env[1210]: time="2025-05-13T00:19:52.104281983Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:19:52.104341 env[1210]: time="2025-05-13T00:19:52.104295030Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:19:52.104341 env[1210]: time="2025-05-13T00:19:52.104307101Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:19:52.104436 env[1210]: time="2025-05-13T00:19:52.104419807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:19:52.104541 env[1210]: time="2025-05-13T00:19:52.104524302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:19:52.104812 env[1210]: time="2025-05-13T00:19:52.104787756Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:19:52.104849 env[1210]: time="2025-05-13T00:19:52.104825433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:19:52.104849 env[1210]: time="2025-05-13T00:19:52.104840390Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:19:52.104964 env[1210]: time="2025-05-13T00:19:52.104950251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:19:52.104993 env[1210]: time="2025-05-13T00:19:52.104969435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:19:52.104993 env[1210]: time="2025-05-13T00:19:52.104982969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105053 env[1210]: time="2025-05-13T00:19:52.104994797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105085 env[1210]: time="2025-05-13T00:19:52.105056250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105085 env[1210]: time="2025-05-13T00:19:52.105069460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 13 00:19:52.105085 env[1210]: time="2025-05-13T00:19:52.105081246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105152 env[1210]: time="2025-05-13T00:19:52.105094253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105152 env[1210]: time="2025-05-13T00:19:52.105108153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:19:52.105268 env[1210]: time="2025-05-13T00:19:52.105251545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105301 env[1210]: time="2025-05-13T00:19:52.105270932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105301 env[1210]: time="2025-05-13T00:19:52.105283531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:19:52.105340 env[1210]: time="2025-05-13T00:19:52.105300643Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:19:52.105340 env[1210]: time="2025-05-13T00:19:52.105315071Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:19:52.105340 env[1210]: time="2025-05-13T00:19:52.105325882Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:19:52.105401 env[1210]: time="2025-05-13T00:19:52.105342750Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:19:52.105401 env[1210]: time="2025-05-13T00:19:52.105376484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:19:52.105625 env[1210]: time="2025-05-13T00:19:52.105574176Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:19:52.106234 env[1210]: time="2025-05-13T00:19:52.105642011Z" level=info msg="Connect containerd service" May 13 00:19:52.106234 env[1210]: time="2025-05-13T00:19:52.105672413Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:19:52.106355 env[1210]: time="2025-05-13T00:19:52.106326659Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:19:52.107267 env[1210]: time="2025-05-13T00:19:52.107217981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:19:52.107363 env[1210]: time="2025-05-13T00:19:52.106532683Z" level=info msg="Start subscribing containerd event" May 13 00:19:52.107410 env[1210]: time="2025-05-13T00:19:52.107388564Z" level=info msg="Start recovering state" May 13 00:19:52.107441 env[1210]: time="2025-05-13T00:19:52.107411080Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:19:52.107574 systemd[1]: Started containerd.service. 
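containerd's CRI plugin logs above that no network config was found in /etc/cni/net.d, so pod networking stays uninitialized until a CNI plugin (here, Cilium) drops a config in later. A sketch that performs the same directory check; the accepted extensions follow libcni's defaults (.conf, .conflist, .json), and the check is illustrative rather than a reimplementation of containerd's loader:

```python
# Reproduce the CRI plugin's "no network config found" check: look for CNI
# config files in /etc/cni/net.d using libcni's default extensions.
from pathlib import Path

NET_D = Path("/etc/cni/net.d")
CNI_EXTS = {".conf", ".conflist", ".json"}

def cni_configs():
    if not NET_D.is_dir():
        return []
    return sorted(p for p in NET_D.iterdir() if p.suffix in CNI_EXTS)

if __name__ == "__main__":
    configs = cni_configs()
    if configs:
        for cfg in configs:
            print("found:", cfg)
    else:
        print(f"no network config found in {NET_D}: cni plugin not initialized")
```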
May 13 00:19:52.107692 env[1210]: time="2025-05-13T00:19:52.107673884Z" level=info msg="containerd successfully booted in 0.042212s" May 13 00:19:52.108710 env[1210]: time="2025-05-13T00:19:52.108637390Z" level=info msg="Start event monitor" May 13 00:19:52.108710 env[1210]: time="2025-05-13T00:19:52.108685797Z" level=info msg="Start snapshots syncer" May 13 00:19:52.108710 env[1210]: time="2025-05-13T00:19:52.108702380Z" level=info msg="Start cni network conf syncer for default" May 13 00:19:52.108710 env[1210]: time="2025-05-13T00:19:52.108711443Z" level=info msg="Start streaming server" May 13 00:19:52.115367 locksmithd[1239]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:19:52.720296 systemd-networkd[1056]: eth0: Gained IPv6LL May 13 00:19:52.724928 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:19:52.726228 systemd[1]: Reached target network-online.target. May 13 00:19:52.728640 systemd[1]: Starting kubelet.service... May 13 00:19:53.325803 systemd[1]: Started kubelet.service. May 13 00:19:53.743946 kubelet[1255]: E0513 00:19:53.743832 1255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:19:53.745791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:19:53.745932 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:19:54.499565 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:19:54.518043 systemd[1]: Finished sshd-keygen.service. May 13 00:19:54.520368 systemd[1]: Starting issuegen.service... May 13 00:19:54.524882 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:19:54.525043 systemd[1]: Finished issuegen.service. May 13 00:19:54.527302 systemd[1]: Starting systemd-user-sessions.service... May 13 00:19:54.533335 systemd[1]: Finished systemd-user-sessions.service. May 13 00:19:54.535682 systemd[1]: Started getty@tty1.service. May 13 00:19:54.537768 systemd[1]: Started serial-getty@ttyAMA0.service. May 13 00:19:54.538862 systemd[1]: Reached target getty.target. May 13 00:19:54.539855 systemd[1]: Reached target multi-user.target. May 13 00:19:54.541926 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:19:54.549067 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:19:54.549247 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:19:54.550323 systemd[1]: Startup finished in 586ms (kernel) + 4.061s (initrd) + 5.917s (userspace) = 10.565s. May 13 00:19:57.526408 systemd[1]: Created slice system-sshd.slice. May 13 00:19:57.527540 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:51454.service. May 13 00:19:57.574464 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 51454 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:57.580726 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:57.590753 systemd[1]: Created slice user-500.slice. May 13 00:19:57.591864 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:19:57.593266 systemd-logind[1199]: New session 1 of user core. May 13 00:19:57.599751 systemd[1]: Finished user-runtime-dir@500.service. 
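The first kubelet start above fails because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is normally written during `kubeadm init`/`join`. As a hedged sketch of what the unit expects to find (the apiVersion/kind are the real KubeletConfiguration identifiers, and cgroupDriver/staticPodPath match values seen later in this log, but the file as a whole is a placeholder, not this node's eventual config):

```python
# Write a minimal kubelet config file of the kind the failing unit expects.
# Every value besides apiVersion/kind is a placeholder for illustration only.
from pathlib import Path
import textwrap

CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")

MINIMAL_CONFIG = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
""")

if __name__ == "__main__":
    if CONFIG_PATH.exists():
        print(f"{CONFIG_PATH} already present, leaving it alone")
    else:
        CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
        CONFIG_PATH.write_text(MINIMAL_CONFIG)
        print(f"wrote placeholder config to {CONFIG_PATH}")
```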
May 13 00:19:57.600998 systemd[1]: Starting user@500.service... May 13 00:19:57.604064 (systemd)[1280]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:57.663497 systemd[1280]: Queued start job for default target default.target. May 13 00:19:57.663998 systemd[1280]: Reached target paths.target. May 13 00:19:57.664030 systemd[1280]: Reached target sockets.target. May 13 00:19:57.664041 systemd[1280]: Reached target timers.target. May 13 00:19:57.664051 systemd[1280]: Reached target basic.target. May 13 00:19:57.664092 systemd[1280]: Reached target default.target. May 13 00:19:57.664117 systemd[1280]: Startup finished in 54ms. May 13 00:19:57.664264 systemd[1]: Started user@500.service. May 13 00:19:57.665221 systemd[1]: Started session-1.scope. May 13 00:19:57.716974 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:51460.service. May 13 00:19:57.749976 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:57.751249 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:57.755205 systemd-logind[1199]: New session 2 of user core. May 13 00:19:57.756774 systemd[1]: Started session-2.scope. May 13 00:19:57.813202 sshd[1289]: pam_unix(sshd:session): session closed for user core May 13 00:19:57.816121 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:51464.service. May 13 00:19:57.816581 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:51460.service: Deactivated successfully. May 13 00:19:57.817616 systemd-logind[1199]: Session 2 logged out. Waiting for processes to exit. May 13 00:19:57.817693 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:19:57.819590 systemd-logind[1199]: Removed session 2. May 13 00:19:57.849531 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:57.850766 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:57.854047 systemd-logind[1199]: New session 3 of user core. May 13 00:19:57.854843 systemd[1]: Started session-3.scope. May 13 00:19:57.904764 sshd[1294]: pam_unix(sshd:session): session closed for user core May 13 00:19:57.908602 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:51470.service. May 13 00:19:57.909086 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:51464.service: Deactivated successfully. May 13 00:19:57.909782 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:19:57.910317 systemd-logind[1199]: Session 3 logged out. Waiting for processes to exit. May 13 00:19:57.911222 systemd-logind[1199]: Removed session 3. May 13 00:19:57.941167 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:57.942599 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:57.945827 systemd-logind[1199]: New session 4 of user core. May 13 00:19:57.946678 systemd[1]: Started session-4.scope. May 13 00:19:58.002594 sshd[1301]: pam_unix(sshd:session): session closed for user core May 13 00:19:58.006200 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:51470.service: Deactivated successfully. May 13 00:19:58.006866 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:19:58.007392 systemd-logind[1199]: Session 4 logged out. Waiting for processes to exit. May 13 00:19:58.008557 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:51480.service. 
May 13 00:19:58.009224 systemd-logind[1199]: Removed session 4. May 13 00:19:58.041389 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 51480 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:58.042634 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:58.046049 systemd-logind[1199]: New session 5 of user core. May 13 00:19:58.046912 systemd[1]: Started session-5.scope. May 13 00:19:58.108373 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:19:58.108598 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:19:58.120858 systemd[1]: Starting coreos-metadata.service... May 13 00:19:58.127710 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:19:58.128022 systemd[1]: Finished coreos-metadata.service. May 13 00:19:58.624739 systemd[1]: Stopped kubelet.service. May 13 00:19:58.627247 systemd[1]: Starting kubelet.service... May 13 00:19:58.649758 systemd[1]: Reloading. May 13 00:19:58.696703 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-05-13T00:19:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:19:58.696735 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-05-13T00:19:58Z" level=info msg="torcx already run" May 13 00:19:58.839536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:19:58.839686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:19:58.855077 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:19:58.920947 systemd[1]: Started kubelet.service. May 13 00:19:58.925149 systemd[1]: Stopping kubelet.service... May 13 00:19:58.925693 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:19:58.925958 systemd[1]: Stopped kubelet.service. May 13 00:19:58.927852 systemd[1]: Starting kubelet.service... May 13 00:19:59.019530 systemd[1]: Started kubelet.service. May 13 00:19:59.054916 kubelet[1419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:19:59.054916 kubelet[1419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:19:59.054916 kubelet[1419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:19:59.055297 kubelet[1419]: I0513 00:19:59.054981 1419 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:19:59.651195 kubelet[1419]: I0513 00:19:59.651144 1419 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:19:59.651195 kubelet[1419]: I0513 00:19:59.651182 1419 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:19:59.651459 kubelet[1419]: I0513 00:19:59.651432 1419 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:19:59.760258 kubelet[1419]: I0513 00:19:59.760220 1419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:19:59.767587 kubelet[1419]: E0513 00:19:59.767553 1419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:19:59.767587 kubelet[1419]: I0513 00:19:59.767584 1419 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:19:59.770808 kubelet[1419]: I0513 00:19:59.770786 1419 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:19:59.774475 kubelet[1419]: I0513 00:19:59.774395 1419 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:19:59.774672 kubelet[1419]: I0513 00:19:59.774482 1419 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:19:59.774865 kubelet[1419]: I0513 00:19:59.774854 1419 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:19:59.774916 kubelet[1419]: I0513 00:19:59.774867 1419 container_manager_linux.go:304] "Creating device plugin manager" May 13 
00:19:59.775231 kubelet[1419]: I0513 00:19:59.775215 1419 state_mem.go:36] "Initialized new in-memory state store" May 13 00:19:59.778610 kubelet[1419]: I0513 00:19:59.778578 1419 kubelet.go:446] "Attempting to sync node with API server" May 13 00:19:59.778610 kubelet[1419]: I0513 00:19:59.778610 1419 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:19:59.778677 kubelet[1419]: I0513 00:19:59.778629 1419 kubelet.go:352] "Adding apiserver pod source" May 13 00:19:59.778677 kubelet[1419]: I0513 00:19:59.778641 1419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:19:59.778797 kubelet[1419]: E0513 00:19:59.778771 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:19:59.778837 kubelet[1419]: E0513 00:19:59.778828 1419 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:19:59.784690 kubelet[1419]: I0513 00:19:59.784667 1419 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:19:59.786230 kubelet[1419]: I0513 00:19:59.786212 1419 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:19:59.786806 kubelet[1419]: W0513 00:19:59.786793 1419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:19:59.790084 kubelet[1419]: I0513 00:19:59.790057 1419 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:19:59.790168 kubelet[1419]: I0513 00:19:59.790104 1419 server.go:1287] "Started kubelet" May 13 00:19:59.790325 kubelet[1419]: I0513 00:19:59.790293 1419 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:19:59.791463 kubelet[1419]: I0513 00:19:59.791441 1419 server.go:490] "Adding debug handlers to kubelet server" May 13 00:19:59.805061 kubelet[1419]: E0513 00:19:59.805037 1419 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:19:59.805975 kubelet[1419]: I0513 00:19:59.805886 1419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:19:59.806338 kubelet[1419]: I0513 00:19:59.806316 1419 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:19:59.807961 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
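The NodeConfig dump above includes the kubelet's default hard eviction thresholds. Restated as plain data for readability (the values are copied exactly from the logged HardEvictionThresholds; only the dict layout is ours):

```python
# Hard eviction thresholds from the kubelet's NodeConfig dump above.
HARD_EVICTION = {
    "memory.available":   "100Mi",  # absolute quantity; the rest are percentages
    "nodefs.available":   "10%",
    "nodefs.inodesFree":  "5%",
    "imagefs.available":  "15%",
    "imagefs.inodesFree": "5%",
}

for signal, threshold in HARD_EVICTION.items():
    print(f"evict when {signal} < {threshold}")
```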
May 13 00:19:59.808118 kubelet[1419]: I0513 00:19:59.808091 1419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:19:59.810270 kubelet[1419]: I0513 00:19:59.810238 1419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:19:59.811268 kubelet[1419]: I0513 00:19:59.811241 1419 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:19:59.811993 kubelet[1419]: E0513 00:19:59.811964 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" May 13 00:19:59.812448 kubelet[1419]: I0513 00:19:59.812421 1419 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:19:59.812498 kubelet[1419]: I0513 00:19:59.812488 1419 reconciler.go:26] "Reconciler: start to sync state" May 13 00:19:59.814666 kubelet[1419]: I0513 00:19:59.814645 1419 factory.go:221] Registration of the containerd container factory successfully May 13 00:19:59.814767 kubelet[1419]: I0513 00:19:59.814756 1419 factory.go:221] Registration of the systemd container factory successfully May 13 00:19:59.814892 kubelet[1419]: I0513 00:19:59.814873 1419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:19:59.846354 kubelet[1419]: I0513 00:19:59.846324 1419 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:19:59.846354 kubelet[1419]: I0513 00:19:59.846345 1419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:19:59.846517 kubelet[1419]: I0513 00:19:59.846367 1419 state_mem.go:36] "Initialized new in-memory state store" May 13 00:19:59.848338 kubelet[1419]: E0513 00:19:59.848289 1419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.39\" not found" node="10.0.0.39" May 13 00:19:59.912192 kubelet[1419]: E0513 00:19:59.912066 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" May 13 00:19:59.928112 kubelet[1419]: I0513 00:19:59.928066 1419 policy_none.go:49] "None policy: Start" May 13 00:19:59.928112 kubelet[1419]: I0513 00:19:59.928113 1419 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:19:59.928260 kubelet[1419]: I0513 00:19:59.928140 1419 state_mem.go:35] "Initializing new in-memory state store" May 13 00:19:59.932904 systemd[1]: Created slice kubepods.slice. May 13 00:19:59.937490 systemd[1]: Created slice kubepods-burstable.slice. May 13 00:19:59.940265 systemd[1]: Created slice kubepods-besteffort.slice. May 13 00:19:59.948974 kubelet[1419]: I0513 00:19:59.948928 1419 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:19:59.949110 kubelet[1419]: I0513 00:19:59.949089 1419 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:19:59.949159 kubelet[1419]: I0513 00:19:59.949108 1419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:19:59.949707 kubelet[1419]: I0513 00:19:59.949379 1419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:19:59.950604 kubelet[1419]: E0513 00:19:59.950128 1419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:19:59.950604 kubelet[1419]: E0513 00:19:59.950178 1419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.39\" not found" May 13 00:20:00.004042 kubelet[1419]: I0513 00:20:00.003999 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:20:00.004950 kubelet[1419]: I0513 00:20:00.004929 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:20:00.005015 kubelet[1419]: I0513 00:20:00.004959 1419 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:20:00.005015 kubelet[1419]: I0513 00:20:00.004981 1419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 00:20:00.005015 kubelet[1419]: I0513 00:20:00.004987 1419 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:20:00.005114 kubelet[1419]: E0513 00:20:00.005037 1419 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 00:20:00.050903 kubelet[1419]: I0513 00:20:00.050877 1419 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.39" May 13 00:20:00.055781 kubelet[1419]: I0513 00:20:00.055752 1419 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.39" May 13 00:20:00.059611 kubelet[1419]: I0513 00:20:00.059591 1419 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 00:20:00.059942 env[1210]: time="2025-05-13T00:20:00.059899455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:20:00.060203 kubelet[1419]: I0513 00:20:00.060075 1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 00:20:00.543029 sudo[1311]: pam_unix(sudo:session): session closed for user root May 13 00:20:00.544966 sshd[1308]: pam_unix(sshd:session): session closed for user core May 13 00:20:00.547760 systemd-logind[1199]: Session 5 logged out. Waiting for processes to exit. May 13 00:20:00.547995 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:51480.service: Deactivated successfully. May 13 00:20:00.548669 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:20:00.549297 systemd-logind[1199]: Removed session 5. 
May 13 00:20:00.652969 kubelet[1419]: I0513 00:20:00.652925 1419 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
May 13 00:20:00.653235 kubelet[1419]: W0513 00:20:00.653208 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 13 00:20:00.653527 kubelet[1419]: W0513 00:20:00.653212 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 13 00:20:00.653617 kubelet[1419]: W0513 00:20:00.653234 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 13 00:20:00.779462 kubelet[1419]: E0513 00:20:00.779426 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:00.782868 kubelet[1419]: I0513 00:20:00.782846 1419 apiserver.go:52] "Watching apiserver"
May 13 00:20:00.790877 systemd[1]: Created slice kubepods-burstable-pod7bb5ca81_a35c_4bb6_ae8a_5c4bca8d0e92.slice.
May 13 00:20:00.809047 systemd[1]: Created slice kubepods-besteffort-pod00737819_d274_4567_b8aa_dff7086c1ded.slice.
May 13 00:20:00.813722 kubelet[1419]: I0513 00:20:00.813688 1419 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:20:00.818113 kubelet[1419]: I0513 00:20:00.818079 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00737819-d274-4567-b8aa-dff7086c1ded-kube-proxy\") pod \"kube-proxy-wg2qr\" (UID: \"00737819-d274-4567-b8aa-dff7086c1ded\") " pod="kube-system/kube-proxy-wg2qr"
May 13 00:20:00.818213 kubelet[1419]: I0513 00:20:00.818118 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-lib-modules\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818213 kubelet[1419]: I0513 00:20:00.818146 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-xtables-lock\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818213 kubelet[1419]: I0513 00:20:00.818162 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-clustermesh-secrets\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818213 kubelet[1419]: I0513 00:20:00.818178 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hostproc\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818213 kubelet[1419]: I0513 00:20:00.818194 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-kernel\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818213 kubelet[1419]: I0513 00:20:00.818210 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq57p\" (UniqueName: \"kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-kube-api-access-fq57p\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818344 kubelet[1419]: I0513 00:20:00.818226 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-etc-cni-netd\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818344 kubelet[1419]: I0513 00:20:00.818244 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-net\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818344 kubelet[1419]: I0513 00:20:00.818278 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00737819-d274-4567-b8aa-dff7086c1ded-xtables-lock\") pod \"kube-proxy-wg2qr\" (UID: \"00737819-d274-4567-b8aa-dff7086c1ded\") " pod="kube-system/kube-proxy-wg2qr"
May 13 00:20:00.818344 kubelet[1419]: I0513 00:20:00.818306 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00737819-d274-4567-b8aa-dff7086c1ded-lib-modules\") pod \"kube-proxy-wg2qr\" (UID: \"00737819-d274-4567-b8aa-dff7086c1ded\") " pod="kube-system/kube-proxy-wg2qr"
May 13 00:20:00.818428 kubelet[1419]: I0513 00:20:00.818343 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-run\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818428 kubelet[1419]: I0513 00:20:00.818381 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-cgroup\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818428 kubelet[1419]: I0513 00:20:00.818397 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hubble-tls\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818428 kubelet[1419]: I0513 00:20:00.818416 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clcq9\" (UniqueName: \"kubernetes.io/projected/00737819-d274-4567-b8aa-dff7086c1ded-kube-api-access-clcq9\") pod \"kube-proxy-wg2qr\" (UID: \"00737819-d274-4567-b8aa-dff7086c1ded\") " pod="kube-system/kube-proxy-wg2qr"
May 13 00:20:00.818512 kubelet[1419]: I0513 00:20:00.818433 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-bpf-maps\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818512 kubelet[1419]: I0513 00:20:00.818449 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cni-path\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.818512 kubelet[1419]: I0513 00:20:00.818464 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-config-path\") pod \"cilium-lkxsl\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " pod="kube-system/cilium-lkxsl"
May 13 00:20:00.920165 kubelet[1419]: I0513 00:20:00.920098 1419 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 13 00:20:01.107466 kubelet[1419]: E0513 00:20:01.107352 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:01.108968 env[1210]: time="2025-05-13T00:20:01.108907100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkxsl,Uid:7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92,Namespace:kube-system,Attempt:0,}"
May 13 00:20:01.119986 kubelet[1419]: E0513 00:20:01.119962 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:01.120636 env[1210]: time="2025-05-13T00:20:01.120597332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wg2qr,Uid:00737819-d274-4567-b8aa-dff7086c1ded,Namespace:kube-system,Attempt:0,}"
May 13 00:20:01.685779 env[1210]: time="2025-05-13T00:20:01.685717507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.686790 env[1210]: time="2025-05-13T00:20:01.686762983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.688751 env[1210]: time="2025-05-13T00:20:01.688723547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.690119 env[1210]: time="2025-05-13T00:20:01.690092542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.692458 env[1210]: time="2025-05-13T00:20:01.692425781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.693885 env[1210]: time="2025-05-13T00:20:01.693856473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.694584 env[1210]: time="2025-05-13T00:20:01.694558776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.696335 env[1210]: time="2025-05-13T00:20:01.696281393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:01.721355 env[1210]: time="2025-05-13T00:20:01.721280454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:20:01.721355 env[1210]: time="2025-05-13T00:20:01.721336403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:20:01.721355 env[1210]: time="2025-05-13T00:20:01.721349345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:20:01.721664 env[1210]: time="2025-05-13T00:20:01.721620733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6fd5b1fca22c161e08b3211820afb1d41349854324fab71b1235c4c23592bf3 pid=1484 runtime=io.containerd.runc.v2
May 13 00:20:01.721778 env[1210]: time="2025-05-13T00:20:01.721328083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:20:01.721935 env[1210]: time="2025-05-13T00:20:01.721767118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:20:01.721935 env[1210]: time="2025-05-13T00:20:01.721869853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:20:01.722215 env[1210]: time="2025-05-13T00:20:01.722163186Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b pid=1485 runtime=io.containerd.runc.v2
May 13 00:20:01.752579 systemd[1]: Started cri-containerd-06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b.scope.
May 13 00:20:01.753923 systemd[1]: Started cri-containerd-c6fd5b1fca22c161e08b3211820afb1d41349854324fab71b1235c4c23592bf3.scope.
May 13 00:20:01.779708 kubelet[1419]: E0513 00:20:01.779652 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:01.790068 env[1210]: time="2025-05-13T00:20:01.790022262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkxsl,Uid:7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\""
May 13 00:20:01.792813 kubelet[1419]: E0513 00:20:01.792768 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:01.793977 env[1210]: time="2025-05-13T00:20:01.793940616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wg2qr,Uid:00737819-d274-4567-b8aa-dff7086c1ded,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6fd5b1fca22c161e08b3211820afb1d41349854324fab71b1235c4c23592bf3\""
May 13 00:20:01.794662 env[1210]: time="2025-05-13T00:20:01.794629535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 13 00:20:01.795061 kubelet[1419]: E0513 00:20:01.795007 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:01.926092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003651640.mount: Deactivated successfully.
May 13 00:20:02.780831 kubelet[1419]: E0513 00:20:02.780788 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:03.781968 kubelet[1419]: E0513 00:20:03.781923 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:04.782912 kubelet[1419]: E0513 00:20:04.782860 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:05.461681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949085102.mount: Deactivated successfully.
May 13 00:20:05.784178 kubelet[1419]: E0513 00:20:05.784010 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:06.784303 kubelet[1419]: E0513 00:20:06.784244 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:07.710227 env[1210]: time="2025-05-13T00:20:07.710166250Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:07.711684 env[1210]: time="2025-05-13T00:20:07.711643372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:07.713336 env[1210]: time="2025-05-13T00:20:07.713297477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:07.713943 env[1210]: time="2025-05-13T00:20:07.713914976Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 13 00:20:07.715423 env[1210]: time="2025-05-13T00:20:07.715396347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 00:20:07.716699 env[1210]: time="2025-05-13T00:20:07.716657440Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:20:07.728751 env[1210]: time="2025-05-13T00:20:07.728715136Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\""
May 13 00:20:07.729316 env[1210]: time="2025-05-13T00:20:07.729290624Z" level=info msg="StartContainer for \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\""
May 13 00:20:07.755828 systemd[1]: Started cri-containerd-426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92.scope.
May 13 00:20:07.784340 kubelet[1419]: E0513 00:20:07.784300 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:07.795551 env[1210]: time="2025-05-13T00:20:07.795498015Z" level=info msg="StartContainer for \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\" returns successfully"
May 13 00:20:07.835519 systemd[1]: cri-containerd-426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92.scope: Deactivated successfully.
May 13 00:20:07.945297 env[1210]: time="2025-05-13T00:20:07.945253384Z" level=info msg="shim disconnected" id=426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92
May 13 00:20:07.945528 env[1210]: time="2025-05-13T00:20:07.945509619Z" level=warning msg="cleaning up after shim disconnected" id=426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92 namespace=k8s.io
May 13 00:20:07.945600 env[1210]: time="2025-05-13T00:20:07.945587027Z" level=info msg="cleaning up dead shim"
May 13 00:20:07.952027 env[1210]: time="2025-05-13T00:20:07.951996520Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1598 runtime=io.containerd.runc.v2\n"
May 13 00:20:08.024042 kubelet[1419]: E0513 00:20:08.023509 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:08.025840 env[1210]: time="2025-05-13T00:20:08.025802262Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:20:08.037633 env[1210]: time="2025-05-13T00:20:08.037575318Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\""
May 13 00:20:08.038203 env[1210]: time="2025-05-13T00:20:08.038178823Z" level=info msg="StartContainer for \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\""
May 13 00:20:08.052810 systemd[1]: Started cri-containerd-a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875.scope.
May 13 00:20:08.090078 env[1210]: time="2025-05-13T00:20:08.090030313Z" level=info msg="StartContainer for \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\" returns successfully"
May 13 00:20:08.100537 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:20:08.100739 systemd[1]: Stopped systemd-sysctl.service.
May 13 00:20:08.100895 systemd[1]: Stopping systemd-sysctl.service...
May 13 00:20:08.102338 systemd[1]: Starting systemd-sysctl.service...
May 13 00:20:08.103325 systemd[1]: cri-containerd-a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875.scope: Deactivated successfully.
May 13 00:20:08.109906 systemd[1]: Finished systemd-sysctl.service.
May 13 00:20:08.122795 env[1210]: time="2025-05-13T00:20:08.122743976Z" level=info msg="shim disconnected" id=a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875
May 13 00:20:08.122795 env[1210]: time="2025-05-13T00:20:08.122790545Z" level=warning msg="cleaning up after shim disconnected" id=a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875 namespace=k8s.io
May 13 00:20:08.122795 env[1210]: time="2025-05-13T00:20:08.122800524Z" level=info msg="cleaning up dead shim"
May 13 00:20:08.129768 env[1210]: time="2025-05-13T00:20:08.129728828Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1662 runtime=io.containerd.runc.v2\n"
May 13 00:20:08.724895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92-rootfs.mount: Deactivated successfully.
May 13 00:20:08.784461 kubelet[1419]: E0513 00:20:08.784408 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:08.831892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766550213.mount: Deactivated successfully.
May 13 00:20:09.027041 kubelet[1419]: E0513 00:20:09.026390 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:09.028577 env[1210]: time="2025-05-13T00:20:09.028536080Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:20:09.040779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22784392.mount: Deactivated successfully.
May 13 00:20:09.047226 env[1210]: time="2025-05-13T00:20:09.047173586Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\""
May 13 00:20:09.047587 env[1210]: time="2025-05-13T00:20:09.047529857Z" level=info msg="StartContainer for \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\""
May 13 00:20:09.062065 systemd[1]: Started cri-containerd-eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543.scope.
May 13 00:20:09.111932 env[1210]: time="2025-05-13T00:20:09.111893848Z" level=info msg="StartContainer for \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\" returns successfully"
May 13 00:20:09.112758 systemd[1]: cri-containerd-eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543.scope: Deactivated successfully.
May 13 00:20:09.214955 env[1210]: time="2025-05-13T00:20:09.214910618Z" level=info msg="shim disconnected" id=eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543
May 13 00:20:09.214955 env[1210]: time="2025-05-13T00:20:09.214951966Z" level=warning msg="cleaning up after shim disconnected" id=eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543 namespace=k8s.io
May 13 00:20:09.214955 env[1210]: time="2025-05-13T00:20:09.214960701Z" level=info msg="cleaning up dead shim"
May 13 00:20:09.221298 env[1210]: time="2025-05-13T00:20:09.221262364Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1718 runtime=io.containerd.runc.v2\n"
May 13 00:20:09.323296 env[1210]: time="2025-05-13T00:20:09.322790102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:09.324712 env[1210]: time="2025-05-13T00:20:09.324668661Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:09.326048 env[1210]: time="2025-05-13T00:20:09.326022869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:09.327309 env[1210]: time="2025-05-13T00:20:09.327281199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:09.327746 env[1210]: time="2025-05-13T00:20:09.327708027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 13 00:20:09.329437 env[1210]: time="2025-05-13T00:20:09.329403603Z" level=info msg="CreateContainer within sandbox \"c6fd5b1fca22c161e08b3211820afb1d41349854324fab71b1235c4c23592bf3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:20:09.339847 env[1210]: time="2025-05-13T00:20:09.339802429Z" level=info msg="CreateContainer within sandbox \"c6fd5b1fca22c161e08b3211820afb1d41349854324fab71b1235c4c23592bf3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f6e510d2017ebff5ef81fe390ed16c3bccc5b62bc95e96a8698dfda820e81af\""
May 13 00:20:09.340288 env[1210]: time="2025-05-13T00:20:09.340260870Z" level=info msg="StartContainer for \"3f6e510d2017ebff5ef81fe390ed16c3bccc5b62bc95e96a8698dfda820e81af\""
May 13 00:20:09.353535 systemd[1]: Started cri-containerd-3f6e510d2017ebff5ef81fe390ed16c3bccc5b62bc95e96a8698dfda820e81af.scope.
May 13 00:20:09.389385 env[1210]: time="2025-05-13T00:20:09.389341765Z" level=info msg="StartContainer for \"3f6e510d2017ebff5ef81fe390ed16c3bccc5b62bc95e96a8698dfda820e81af\" returns successfully"
May 13 00:20:09.724894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065962053.mount: Deactivated successfully.
May 13 00:20:09.785307 kubelet[1419]: E0513 00:20:09.785254 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:10.029977 kubelet[1419]: E0513 00:20:10.029703 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:10.032214 kubelet[1419]: E0513 00:20:10.032143 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:10.032918 env[1210]: time="2025-05-13T00:20:10.032775430Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:20:10.043146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939724222.mount: Deactivated successfully.
May 13 00:20:10.048489 env[1210]: time="2025-05-13T00:20:10.048405823Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\""
May 13 00:20:10.049734 env[1210]: time="2025-05-13T00:20:10.049708076Z" level=info msg="StartContainer for \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\""
May 13 00:20:10.055873 kubelet[1419]: I0513 00:20:10.055817 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wg2qr" podStartSLOduration=2.522804197 podStartE2EDuration="10.055800289s" podCreationTimestamp="2025-05-13 00:20:00 +0000 UTC" firstStartedPulling="2025-05-13 00:20:01.795372112 +0000 UTC m=+2.772253539" lastFinishedPulling="2025-05-13 00:20:09.328368204 +0000 UTC m=+10.305249631" observedRunningTime="2025-05-13 00:20:10.055625835 +0000 UTC m=+11.032507222" watchObservedRunningTime="2025-05-13 00:20:10.055800289 +0000 UTC m=+11.032681716"
May 13 00:20:10.063397 systemd[1]: Started cri-containerd-c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5.scope.
May 13 00:20:10.103115 systemd[1]: cri-containerd-c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5.scope: Deactivated successfully.
May 13 00:20:10.105698 env[1210]: time="2025-05-13T00:20:10.105656057Z" level=info msg="StartContainer for \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\" returns successfully"
May 13 00:20:10.155518 env[1210]: time="2025-05-13T00:20:10.155470766Z" level=info msg="shim disconnected" id=c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5
May 13 00:20:10.155518 env[1210]: time="2025-05-13T00:20:10.155517554Z" level=warning msg="cleaning up after shim disconnected" id=c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5 namespace=k8s.io
May 13 00:20:10.155738 env[1210]: time="2025-05-13T00:20:10.155527007Z" level=info msg="cleaning up dead shim"
May 13 00:20:10.161668 env[1210]: time="2025-05-13T00:20:10.161636365Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1942 runtime=io.containerd.runc.v2\n"
May 13 00:20:10.785769 kubelet[1419]: E0513 00:20:10.785730 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:11.039114 kubelet[1419]: E0513 00:20:11.038842 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:11.039648 kubelet[1419]: E0513 00:20:11.039511 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:11.041424 env[1210]: time="2025-05-13T00:20:11.041383403Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:20:11.054094 env[1210]: time="2025-05-13T00:20:11.054014907Z" level=info msg="CreateContainer within sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\""
May 13 00:20:11.054608 env[1210]: time="2025-05-13T00:20:11.054581107Z" level=info msg="StartContainer for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\""
May 13 00:20:11.070688 systemd[1]: Started cri-containerd-33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b.scope.
May 13 00:20:11.113921 env[1210]: time="2025-05-13T00:20:11.109913956Z" level=info msg="StartContainer for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" returns successfully"
May 13 00:20:11.245103 kubelet[1419]: I0513 00:20:11.245071 1419 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 13 00:20:11.497163 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 13 00:20:11.724417 systemd[1]: run-containerd-runc-k8s.io-33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b-runc.o3uJV1.mount: Deactivated successfully.
May 13 00:20:11.734164 kernel: Initializing XFRM netlink socket
May 13 00:20:11.737157 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 13 00:20:11.786898 kubelet[1419]: E0513 00:20:11.786802 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:12.046435 kubelet[1419]: E0513 00:20:12.046327 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:12.787842 kubelet[1419]: E0513 00:20:12.787791 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:13.047687 kubelet[1419]: E0513 00:20:13.047571 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:13.349307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 13 00:20:13.349410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 13 00:20:13.347336 systemd-networkd[1056]: cilium_host: Link UP
May 13 00:20:13.347452 systemd-networkd[1056]: cilium_net: Link UP
May 13 00:20:13.348198 systemd-networkd[1056]: cilium_net: Gained carrier
May 13 00:20:13.350148 systemd-networkd[1056]: cilium_host: Gained carrier
May 13 00:20:13.427489 systemd-networkd[1056]: cilium_vxlan: Link UP
May 13 00:20:13.427497 systemd-networkd[1056]: cilium_vxlan: Gained carrier
May 13 00:20:13.711162 kernel: NET: Registered PF_ALG protocol family
May 13 00:20:13.788902 kubelet[1419]: E0513 00:20:13.788855 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:14.048651 kubelet[1419]: E0513 00:20:14.048541 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:14.224521 systemd-networkd[1056]: cilium_host: Gained IPv6LL
May 13 00:20:14.288430 systemd-networkd[1056]: cilium_net: Gained IPv6LL
May 13 00:20:14.327955 systemd-networkd[1056]: lxc_health: Link UP
May 13 00:20:14.340858 systemd-networkd[1056]: lxc_health: Gained carrier
May 13 00:20:14.341319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:20:14.608268 systemd-networkd[1056]: cilium_vxlan: Gained IPv6LL
May 13 00:20:14.790049 kubelet[1419]: E0513 00:20:14.789994 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:15.696294 systemd-networkd[1056]: lxc_health: Gained IPv6LL
May 13 00:20:15.790810 kubelet[1419]: E0513 00:20:15.790766 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:16.067600 kubelet[1419]: E0513 00:20:16.067492 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:16.085796 kubelet[1419]: I0513 00:20:16.085720 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lkxsl" podStartSLOduration=10.164673266 podStartE2EDuration="16.085702299s" podCreationTimestamp="2025-05-13 00:20:00 +0000 UTC" firstStartedPulling="2025-05-13 00:20:01.794186642 +0000 UTC m=+2.771068069" lastFinishedPulling="2025-05-13 00:20:07.715215715 +0000 UTC m=+8.692097102" observedRunningTime="2025-05-13 00:20:12.06782939 +0000 UTC m=+13.044710817" watchObservedRunningTime="2025-05-13 00:20:16.085702299 +0000 UTC m=+17.062583726"
May 13 00:20:16.474065 systemd[1]: Created slice kubepods-besteffort-pod390e4f7b_3813_43be_bc67_dcaa1747aab8.slice.
May 13 00:20:16.519316 kubelet[1419]: I0513 00:20:16.519255 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spn8m\" (UniqueName: \"kubernetes.io/projected/390e4f7b-3813-43be-bc67-dcaa1747aab8-kube-api-access-spn8m\") pod \"nginx-deployment-7fcdb87857-95rql\" (UID: \"390e4f7b-3813-43be-bc67-dcaa1747aab8\") " pod="default/nginx-deployment-7fcdb87857-95rql"
May 13 00:20:16.777681 env[1210]: time="2025-05-13T00:20:16.777202317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-95rql,Uid:390e4f7b-3813-43be-bc67-dcaa1747aab8,Namespace:default,Attempt:0,}"
May 13 00:20:16.792369 kubelet[1419]: E0513 00:20:16.791747 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:16.815234 systemd-networkd[1056]: lxc15186bc4dd4d: Link UP
May 13 00:20:16.827407 kernel: eth0: renamed from tmp8a4c5
May 13 00:20:16.835371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:20:16.835464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc15186bc4dd4d: link becomes ready
May 13 00:20:16.835383 systemd-networkd[1056]: lxc15186bc4dd4d: Gained carrier
May 13 00:20:17.052580 kubelet[1419]: E0513 00:20:17.052191 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:17.792772 kubelet[1419]: E0513 00:20:17.792712 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:18.054054 kubelet[1419]: E0513 00:20:18.053738 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:20:18.256355 systemd-networkd[1056]: lxc15186bc4dd4d: Gained IPv6LL
May 13 00:20:18.793747 kubelet[1419]: E0513 00:20:18.793690 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:18.844237 env[1210]: time="2025-05-13T00:20:18.844024301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:20:18.844569 env[1210]: time="2025-05-13T00:20:18.844221400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:20:18.844569 env[1210]: time="2025-05-13T00:20:18.844232846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:20:18.844569 env[1210]: time="2025-05-13T00:20:18.844439309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a4c5de8688f1926e53bad7c1058ffce3f15fccaa8a30a498b9e554d98db6d36 pid=2496 runtime=io.containerd.runc.v2
May 13 00:20:18.856770 systemd[1]: run-containerd-runc-k8s.io-8a4c5de8688f1926e53bad7c1058ffce3f15fccaa8a30a498b9e554d98db6d36-runc.AN9EPK.mount: Deactivated successfully.
May 13 00:20:18.858209 systemd[1]: Started cri-containerd-8a4c5de8688f1926e53bad7c1058ffce3f15fccaa8a30a498b9e554d98db6d36.scope.
May 13 00:20:18.912943 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:20:18.929702 env[1210]: time="2025-05-13T00:20:18.929645975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-95rql,Uid:390e4f7b-3813-43be-bc67-dcaa1747aab8,Namespace:default,Attempt:0,} returns sandbox id \"8a4c5de8688f1926e53bad7c1058ffce3f15fccaa8a30a498b9e554d98db6d36\""
May 13 00:20:18.930978 env[1210]: time="2025-05-13T00:20:18.930943983Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 13 00:20:19.778992 kubelet[1419]: E0513 00:20:19.778950 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:19.794830 kubelet[1419]: E0513 00:20:19.794797 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:20.795558 kubelet[1419]: E0513 00:20:20.795515 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:21.079368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513561275.mount: Deactivated successfully.
May 13 00:20:21.796074 kubelet[1419]: E0513 00:20:21.796024 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:22.307830 env[1210]: time="2025-05-13T00:20:22.307780563Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:22.309451 env[1210]: time="2025-05-13T00:20:22.309413602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:22.310914 env[1210]: time="2025-05-13T00:20:22.310886393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:22.312396 env[1210]: time="2025-05-13T00:20:22.312360865Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:22.314015 env[1210]: time="2025-05-13T00:20:22.313982861Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\""
May 13 00:20:22.315921 env[1210]: time="2025-05-13T00:20:22.315890060Z" level=info msg="CreateContainer within sandbox \"8a4c5de8688f1926e53bad7c1058ffce3f15fccaa8a30a498b9e554d98db6d36\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
May 13 00:20:22.325437 env[1210]: time="2025-05-13T00:20:22.325398126Z" level=info msg="CreateContainer within sandbox \"8a4c5de8688f1926e53bad7c1058ffce3f15fccaa8a30a498b9e554d98db6d36\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4ef0068ba983900592f9d32fbd85f6f258b7a7e7f968fa1077684cafbfab701b\""
May 13 00:20:22.325928 env[1210]: time="2025-05-13T00:20:22.325881587Z" level=info msg="StartContainer for \"4ef0068ba983900592f9d32fbd85f6f258b7a7e7f968fa1077684cafbfab701b\""
May 13 00:20:22.341870 systemd[1]: run-containerd-runc-k8s.io-4ef0068ba983900592f9d32fbd85f6f258b7a7e7f968fa1077684cafbfab701b-runc.gSM1Dc.mount: Deactivated successfully.
May 13 00:20:22.343430 systemd[1]: Started cri-containerd-4ef0068ba983900592f9d32fbd85f6f258b7a7e7f968fa1077684cafbfab701b.scope.
May 13 00:20:22.380285 env[1210]: time="2025-05-13T00:20:22.380241437Z" level=info msg="StartContainer for \"4ef0068ba983900592f9d32fbd85f6f258b7a7e7f968fa1077684cafbfab701b\" returns successfully"
May 13 00:20:22.797466 kubelet[1419]: E0513 00:20:22.796900 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:23.072364 kubelet[1419]: I0513 00:20:23.072030 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-95rql" podStartSLOduration=3.6875941 podStartE2EDuration="7.072013289s" podCreationTimestamp="2025-05-13 00:20:16 +0000 UTC" firstStartedPulling="2025-05-13 00:20:18.930427645 +0000 UTC m=+19.907309072" lastFinishedPulling="2025-05-13 00:20:22.314846834 +0000 UTC m=+23.291728261" observedRunningTime="2025-05-13 00:20:23.071442262 +0000 UTC m=+24.048323689" watchObservedRunningTime="2025-05-13 00:20:23.072013289 +0000 UTC m=+24.048894716"
May 13 00:20:23.797416 kubelet[1419]: E0513 00:20:23.797372 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:24.797915 kubelet[1419]: E0513 00:20:24.797875 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:25.799036 kubelet[1419]: E0513 00:20:25.798998 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:26.799514 kubelet[1419]: E0513 00:20:26.799456 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:27.800153 kubelet[1419]: E0513 00:20:27.800098 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:28.051531 systemd[1]: Created slice kubepods-besteffort-podbe475230_4279_44e7_9cdb_af78c2914f68.slice.
May 13 00:20:28.087841 kubelet[1419]: I0513 00:20:28.087792 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/be475230-4279-44e7-9cdb-af78c2914f68-data\") pod \"nfs-server-provisioner-0\" (UID: \"be475230-4279-44e7-9cdb-af78c2914f68\") " pod="default/nfs-server-provisioner-0"
May 13 00:20:28.087841 kubelet[1419]: I0513 00:20:28.087836 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55xk\" (UniqueName: \"kubernetes.io/projected/be475230-4279-44e7-9cdb-af78c2914f68-kube-api-access-j55xk\") pod \"nfs-server-provisioner-0\" (UID: \"be475230-4279-44e7-9cdb-af78c2914f68\") " pod="default/nfs-server-provisioner-0"
May 13 00:20:28.354988 env[1210]: time="2025-05-13T00:20:28.354596108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:be475230-4279-44e7-9cdb-af78c2914f68,Namespace:default,Attempt:0,}"
May 13 00:20:28.380384 systemd-networkd[1056]: lxcc3e279415f13: Link UP
May 13 00:20:28.390156 kernel: eth0: renamed from tmpf88b9
May 13 00:20:28.395174 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:20:28.395266 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc3e279415f13: link becomes ready
May 13 00:20:28.395369 systemd-networkd[1056]: lxcc3e279415f13: Gained carrier
May 13 00:20:28.567067 env[1210]: time="2025-05-13T00:20:28.566989405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:20:28.567067 env[1210]: time="2025-05-13T00:20:28.567030012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:20:28.567067 env[1210]: time="2025-05-13T00:20:28.567040654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:20:28.567287 env[1210]: time="2025-05-13T00:20:28.567248250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f88b97b58ef8fa0726ff626390898efe7bb8b8427609a0287b366a7b27eb5be3 pid=2626 runtime=io.containerd.runc.v2
May 13 00:20:28.582072 systemd[1]: Started cri-containerd-f88b97b58ef8fa0726ff626390898efe7bb8b8427609a0287b366a7b27eb5be3.scope.
May 13 00:20:28.605848 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:20:28.621531 env[1210]: time="2025-05-13T00:20:28.621486130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:be475230-4279-44e7-9cdb-af78c2914f68,Namespace:default,Attempt:0,} returns sandbox id \"f88b97b58ef8fa0726ff626390898efe7bb8b8427609a0287b366a7b27eb5be3\""
May 13 00:20:28.623163 env[1210]: time="2025-05-13T00:20:28.623120131Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
May 13 00:20:28.801150 kubelet[1419]: E0513 00:20:28.801100 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:29.802079 kubelet[1419]: E0513 00:20:29.802031 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:30.352439 systemd-networkd[1056]: lxcc3e279415f13: Gained IPv6LL
May 13 00:20:30.646477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851580504.mount: Deactivated successfully.
May 13 00:20:30.802760 kubelet[1419]: E0513 00:20:30.802715 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:31.803525 kubelet[1419]: E0513 00:20:31.803475 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:32.384525 env[1210]: time="2025-05-13T00:20:32.384478588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:32.386149 env[1210]: time="2025-05-13T00:20:32.386108413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:32.387700 env[1210]: time="2025-05-13T00:20:32.387669108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:32.390096 env[1210]: time="2025-05-13T00:20:32.390058918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:32.390896 env[1210]: time="2025-05-13T00:20:32.390861468Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
May 13 00:20:32.394766 env[1210]: time="2025-05-13T00:20:32.394734882Z" level=info msg="CreateContainer within sandbox \"f88b97b58ef8fa0726ff626390898efe7bb8b8427609a0287b366a7b27eb5be3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
May 13 00:20:32.404604 env[1210]: time="2025-05-13T00:20:32.404557917Z" level=info msg="CreateContainer within sandbox \"f88b97b58ef8fa0726ff626390898efe7bb8b8427609a0287b366a7b27eb5be3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c90a96cc7bad956134725ecccc5bd0988d531e3092af5dad97a6b9ad3ed39a4a\""
May 13 00:20:32.405256 env[1210]: time="2025-05-13T00:20:32.405229570Z" level=info msg="StartContainer for \"c90a96cc7bad956134725ecccc5bd0988d531e3092af5dad97a6b9ad3ed39a4a\""
May 13 00:20:32.423803 systemd[1]: Started cri-containerd-c90a96cc7bad956134725ecccc5bd0988d531e3092af5dad97a6b9ad3ed39a4a.scope.
May 13 00:20:32.564284 env[1210]: time="2025-05-13T00:20:32.564228855Z" level=info msg="StartContainer for \"c90a96cc7bad956134725ecccc5bd0988d531e3092af5dad97a6b9ad3ed39a4a\" returns successfully"
May 13 00:20:32.803791 kubelet[1419]: E0513 00:20:32.803620 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:33.804455 kubelet[1419]: E0513 00:20:33.804415 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:34.805108 kubelet[1419]: E0513 00:20:34.805066 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:35.805996 kubelet[1419]: E0513 00:20:35.805954 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:36.807116 kubelet[1419]: E0513 00:20:36.807072 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:36.990185 update_engine[1203]: I0513 00:20:36.989904 1203 update_attempter.cc:509] Updating boot flags...
May 13 00:20:37.807422 kubelet[1419]: E0513 00:20:37.807373 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:38.807782 kubelet[1419]: E0513 00:20:38.807746 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:39.778779 kubelet[1419]: E0513 00:20:39.778735 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:39.808302 kubelet[1419]: E0513 00:20:39.808272 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:40.808932 kubelet[1419]: E0513 00:20:40.808883 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:41.809919 kubelet[1419]: E0513 00:20:41.809886 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:42.563677 kubelet[1419]: I0513 00:20:42.563605 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.792957755 podStartE2EDuration="14.563588143s" podCreationTimestamp="2025-05-13 00:20:28 +0000 UTC" firstStartedPulling="2025-05-13 00:20:28.622778752 +0000 UTC m=+29.599660179" lastFinishedPulling="2025-05-13 00:20:32.39340914 +0000 UTC m=+33.370290567" observedRunningTime="2025-05-13 00:20:33.091677398 +0000 UTC m=+34.068558825" watchObservedRunningTime="2025-05-13 00:20:42.563588143 +0000 UTC m=+43.540469570"
May 13 00:20:42.568370 systemd[1]: Created slice kubepods-besteffort-podd815dffa_a6e4_4cb5_a4f0_18ae250fb45d.slice.
May 13 00:20:42.672510 kubelet[1419]: I0513 00:20:42.672479 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4st2h\" (UniqueName: \"kubernetes.io/projected/d815dffa-a6e4-4cb5-a4f0-18ae250fb45d-kube-api-access-4st2h\") pod \"test-pod-1\" (UID: \"d815dffa-a6e4-4cb5-a4f0-18ae250fb45d\") " pod="default/test-pod-1"
May 13 00:20:42.672651 kubelet[1419]: I0513 00:20:42.672633 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e75273c-bca6-4437-8b7b-8149be34249c\" (UniqueName: \"kubernetes.io/nfs/d815dffa-a6e4-4cb5-a4f0-18ae250fb45d-pvc-6e75273c-bca6-4437-8b7b-8149be34249c\") pod \"test-pod-1\" (UID: \"d815dffa-a6e4-4cb5-a4f0-18ae250fb45d\") " pod="default/test-pod-1"
May 13 00:20:42.801150 kernel: FS-Cache: Loaded
May 13 00:20:42.810402 kubelet[1419]: E0513 00:20:42.810362 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:20:42.832476 kernel: RPC: Registered named UNIX socket transport module.
May 13 00:20:42.832582 kernel: RPC: Registered udp transport module.
May 13 00:20:42.833633 kernel: RPC: Registered tcp transport module.
May 13 00:20:42.834462 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
May 13 00:20:42.877148 kernel: FS-Cache: Netfs 'nfs' registered for caching
May 13 00:20:43.006482 kernel: NFS: Registering the id_resolver key type
May 13 00:20:43.006635 kernel: Key type id_resolver registered
May 13 00:20:43.006704 kernel: Key type id_legacy registered
May 13 00:20:43.042821 nfsidmap[2763]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 13 00:20:43.046462 nfsidmap[2766]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 13 00:20:43.171474 env[1210]: time="2025-05-13T00:20:43.171429285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d815dffa-a6e4-4cb5-a4f0-18ae250fb45d,Namespace:default,Attempt:0,}"
May 13 00:20:43.193385 systemd-networkd[1056]: lxc23deed22f737: Link UP
May 13 00:20:43.204933 kernel: eth0: renamed from tmp28f09
May 13 00:20:43.214241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:20:43.214354 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc23deed22f737: link becomes ready
May 13 00:20:43.214147 systemd-networkd[1056]: lxc23deed22f737: Gained carrier
May 13 00:20:43.387506 env[1210]: time="2025-05-13T00:20:43.387440777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:20:43.387711 env[1210]: time="2025-05-13T00:20:43.387684797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:20:43.387827 env[1210]: time="2025-05-13T00:20:43.387804086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:20:43.388089 env[1210]: time="2025-05-13T00:20:43.388057066Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f097906048b1a2a98e027b726e47e41550092e19ce74b11f38029fdc35d959 pid=2800 runtime=io.containerd.runc.v2
May 13 00:20:43.398046 systemd[1]: Started cri-containerd-28f097906048b1a2a98e027b726e47e41550092e19ce74b11f38029fdc35d959.scope.
May 13 00:20:43.430687 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:20:43.452434 env[1210]: time="2025-05-13T00:20:43.452387648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d815dffa-a6e4-4cb5-a4f0-18ae250fb45d,Namespace:default,Attempt:0,} returns sandbox id \"28f097906048b1a2a98e027b726e47e41550092e19ce74b11f38029fdc35d959\""
May 13 00:20:43.453784 env[1210]: time="2025-05-13T00:20:43.453713113Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 13 00:20:43.681691 env[1210]: time="2025-05-13T00:20:43.681546463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:43.683726 env[1210]: time="2025-05-13T00:20:43.683682233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:43.685966 env[1210]: time="2025-05-13T00:20:43.685921250Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:43.687407 env[1210]: time="2025-05-13T00:20:43.687362844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:20:43.688753 env[1210]: time="2025-05-13T00:20:43.688716592Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\""
May 13 00:20:43.690645 env[1210]: time="2025-05-13T00:20:43.690611822Z" level=info msg="CreateContainer within sandbox \"28f097906048b1a2a98e027b726e47e41550092e19ce74b11f38029fdc35d959\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 13 00:20:43.703581 env[1210]: time="2025-05-13T00:20:43.703525326Z" level=info msg="CreateContainer within sandbox \"28f097906048b1a2a98e027b726e47e41550092e19ce74b11f38029fdc35d959\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5f82da55dd9101207af39b4bcac0a1b2c88b921fa7db541043977af5e42f36dc\""
May 13 00:20:43.704257 env[1210]: time="2025-05-13T00:20:43.704196820Z" level=info msg="StartContainer for \"5f82da55dd9101207af39b4bcac0a1b2c88b921fa7db541043977af5e42f36dc\""
May 13 00:20:43.717710 systemd[1]: Started cri-containerd-5f82da55dd9101207af39b4bcac0a1b2c88b921fa7db541043977af5e42f36dc.scope.
May 13 00:20:43.778716 env[1210]: time="2025-05-13T00:20:43.778653365Z" level=info msg="StartContainer for \"5f82da55dd9101207af39b4bcac0a1b2c88b921fa7db541043977af5e42f36dc\" returns successfully" May 13 00:20:43.811532 kubelet[1419]: E0513 00:20:43.811496 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:44.111091 kubelet[1419]: I0513 00:20:44.110959 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.874805206 podStartE2EDuration="16.110942094s" podCreationTimestamp="2025-05-13 00:20:28 +0000 UTC" firstStartedPulling="2025-05-13 00:20:43.453342124 +0000 UTC m=+44.430223551" lastFinishedPulling="2025-05-13 00:20:43.689479052 +0000 UTC m=+44.666360439" observedRunningTime="2025-05-13 00:20:44.110729238 +0000 UTC m=+45.087610665" watchObservedRunningTime="2025-05-13 00:20:44.110942094 +0000 UTC m=+45.087823521" May 13 00:20:44.811803 kubelet[1419]: E0513 00:20:44.811748 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:45.136306 systemd-networkd[1056]: lxc23deed22f737: Gained IPv6LL May 13 00:20:45.812115 kubelet[1419]: E0513 00:20:45.812060 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:46.812487 kubelet[1419]: E0513 00:20:46.812438 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:47.813083 kubelet[1419]: E0513 00:20:47.813038 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:48.813521 kubelet[1419]: E0513 00:20:48.813471 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:49.814596 kubelet[1419]: E0513 00:20:49.814552 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:50.815579 kubelet[1419]: E0513 00:20:50.815525 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:50.829866 env[1210]: time="2025-05-13T00:20:50.829798861Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:20:50.834930 env[1210]: time="2025-05-13T00:20:50.834897600Z" level=info msg="StopContainer for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" with timeout 2 (s)" May 13 00:20:50.835276 env[1210]: time="2025-05-13T00:20:50.835252781Z" level=info msg="Stop container \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" with signal terminated" May 13 00:20:50.840483 systemd-networkd[1056]: lxc_health: Link DOWN May 13 00:20:50.840489 systemd-networkd[1056]: lxc_health: Lost carrier May 13 00:20:50.880441 systemd[1]: cri-containerd-33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b.scope: Deactivated successfully. May 13 00:20:50.880756 systemd[1]: cri-containerd-33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b.scope: Consumed 6.579s CPU time. 
May 13 00:20:50.896046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b-rootfs.mount: Deactivated successfully. May 13 00:20:50.906818 env[1210]: time="2025-05-13T00:20:50.906772340Z" level=info msg="shim disconnected" id=33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b May 13 00:20:50.906818 env[1210]: time="2025-05-13T00:20:50.906818383Z" level=warning msg="cleaning up after shim disconnected" id=33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b namespace=k8s.io May 13 00:20:50.907030 env[1210]: time="2025-05-13T00:20:50.906828423Z" level=info msg="cleaning up dead shim" May 13 00:20:50.913322 env[1210]: time="2025-05-13T00:20:50.913282322Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2932 runtime=io.containerd.runc.v2\n" May 13 00:20:50.915693 env[1210]: time="2025-05-13T00:20:50.915657542Z" level=info msg="StopContainer for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" returns successfully" May 13 00:20:50.916384 env[1210]: time="2025-05-13T00:20:50.916346502Z" level=info msg="StopPodSandbox for \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\"" May 13 00:20:50.916533 env[1210]: time="2025-05-13T00:20:50.916512232Z" level=info msg="Container to stop \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:20:50.916632 env[1210]: time="2025-05-13T00:20:50.916615438Z" level=info msg="Container to stop \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:20:50.916703 env[1210]: time="2025-05-13T00:20:50.916687562Z" level=info msg="Container to stop \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:20:50.916765 env[1210]: time="2025-05-13T00:20:50.916749886Z" level=info msg="Container to stop \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:20:50.916828 env[1210]: time="2025-05-13T00:20:50.916806809Z" level=info msg="Container to stop \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:20:50.918436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b-shm.mount: Deactivated successfully. May 13 00:20:50.923360 systemd[1]: cri-containerd-06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b.scope: Deactivated successfully. May 13 00:20:50.941639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b-rootfs.mount: Deactivated successfully. 
May 13 00:20:50.948860 env[1210]: time="2025-05-13T00:20:50.948811008Z" level=info msg="shim disconnected" id=06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b May 13 00:20:50.949022 env[1210]: time="2025-05-13T00:20:50.948863971Z" level=warning msg="cleaning up after shim disconnected" id=06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b namespace=k8s.io May 13 00:20:50.949022 env[1210]: time="2025-05-13T00:20:50.948875212Z" level=info msg="cleaning up dead shim" May 13 00:20:50.956549 env[1210]: time="2025-05-13T00:20:50.956508900Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2963 runtime=io.containerd.runc.v2\n" May 13 00:20:50.956842 env[1210]: time="2025-05-13T00:20:50.956819078Z" level=info msg="TearDown network for sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" successfully" May 13 00:20:50.956881 env[1210]: time="2025-05-13T00:20:50.956842200Z" level=info msg="StopPodSandbox for \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" returns successfully" May 13 00:20:51.019030 kubelet[1419]: I0513 00:20:51.018992 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hubble-tls\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019263 kubelet[1419]: I0513 00:20:51.019243 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-cgroup\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019349 kubelet[1419]: I0513 00:20:51.019337 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-bpf-maps\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019436 kubelet[1419]: I0513 00:20:51.019424 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cni-path\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019520 kubelet[1419]: I0513 00:20:51.019349 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.019520 kubelet[1419]: I0513 00:20:51.019499 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hostproc\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019597 kubelet[1419]: I0513 00:20:51.019375 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.019597 kubelet[1419]: I0513 00:20:51.019493 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cni-path" (OuterVolumeSpecName: "cni-path") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.019597 kubelet[1419]: I0513 00:20:51.019561 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-clustermesh-secrets\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019597 kubelet[1419]: I0513 00:20:51.019597 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-etc-cni-netd\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019597 kubelet[1419]: I0513 00:20:51.019617 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq57p\" (UniqueName: \"kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-kube-api-access-fq57p\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019597 kubelet[1419]: I0513 00:20:51.019632 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-lib-modules\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019648 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-net\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019675 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-config-path\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019695 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-xtables-lock\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019711 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-kernel\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019731 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-run\") pod \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\" (UID: \"7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92\") " May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019767 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-bpf-maps\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.019793 kubelet[1419]: I0513 00:20:51.019781 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cni-path\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.020019 kubelet[1419]: I0513 00:20:51.019789 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-cgroup\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.020019 kubelet[1419]: I0513 00:20:51.019821 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.020019 kubelet[1419]: I0513 00:20:51.019841 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.021993 kubelet[1419]: I0513 00:20:51.020114 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hostproc" (OuterVolumeSpecName: "hostproc") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.021993 kubelet[1419]: I0513 00:20:51.020238 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.021993 kubelet[1419]: I0513 00:20:51.020272 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.021993 kubelet[1419]: I0513 00:20:51.020294 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.021993 kubelet[1419]: I0513 00:20:51.020308 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:51.022202 kubelet[1419]: I0513 00:20:51.021949 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:20:51.027031 systemd[1]: var-lib-kubelet-pods-7bb5ca81\x2da35c\x2d4bb6\x2dae8a\x2d5c4bca8d0e92-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:20:51.027735 kubelet[1419]: I0513 00:20:51.027706 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:20:51.028189 kubelet[1419]: I0513 00:20:51.028162 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:20:51.028414 kubelet[1419]: I0513 00:20:51.028379 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-kube-api-access-fq57p" (OuterVolumeSpecName: "kube-api-access-fq57p") pod "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" (UID: "7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92"). InnerVolumeSpecName "kube-api-access-fq57p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:20:51.118287 kubelet[1419]: I0513 00:20:51.118191 1419 scope.go:117] "RemoveContainer" containerID="33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b" May 13 00:20:51.120145 kubelet[1419]: I0513 00:20:51.120112 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hubble-tls\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.120498 kubelet[1419]: I0513 00:20:51.120480 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-hostproc\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.120816 kubelet[1419]: I0513 00:20:51.120798 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-clustermesh-secrets\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.120889 kubelet[1419]: I0513 00:20:51.120878 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-etc-cni-netd\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121002 kubelet[1419]: I0513 00:20:51.120987 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fq57p\" (UniqueName: \"kubernetes.io/projected/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-kube-api-access-fq57p\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121067 kubelet[1419]: I0513 00:20:51.121055 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-lib-modules\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121155 kubelet[1419]: I0513 00:20:51.121143 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-net\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121224 kubelet[1419]: I0513 00:20:51.121207 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-config-path\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121444 kubelet[1419]: I0513 00:20:51.121429 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-xtables-lock\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121513 kubelet[1419]: I0513 00:20:51.121502 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-host-proc-sys-kernel\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121596 kubelet[1419]: I0513 00:20:51.121568 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92-cilium-run\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:51.121709 systemd[1]: Removed slice kubepods-burstable-pod7bb5ca81_a35c_4bb6_ae8a_5c4bca8d0e92.slice. May 13 00:20:51.121793 systemd[1]: kubepods-burstable-pod7bb5ca81_a35c_4bb6_ae8a_5c4bca8d0e92.slice: Consumed 6.787s CPU time. 
May 13 00:20:51.124067 env[1210]: time="2025-05-13T00:20:51.124029226Z" level=info msg="RemoveContainer for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\"" May 13 00:20:51.133913 env[1210]: time="2025-05-13T00:20:51.133866101Z" level=info msg="RemoveContainer for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" returns successfully" May 13 00:20:51.134172 kubelet[1419]: I0513 00:20:51.134139 1419 scope.go:117] "RemoveContainer" containerID="c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5" May 13 00:20:51.135913 env[1210]: time="2025-05-13T00:20:51.135871455Z" level=info msg="RemoveContainer for \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\"" May 13 00:20:51.138684 env[1210]: time="2025-05-13T00:20:51.138595008Z" level=info msg="RemoveContainer for \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\" returns successfully" May 13 00:20:51.138833 kubelet[1419]: I0513 00:20:51.138774 1419 scope.go:117] "RemoveContainer" containerID="eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543" May 13 00:20:51.140375 env[1210]: time="2025-05-13T00:20:51.140139095Z" level=info msg="RemoveContainer for \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\"" May 13 00:20:51.142479 env[1210]: time="2025-05-13T00:20:51.142394023Z" level=info msg="RemoveContainer for \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\" returns successfully" May 13 00:20:51.142699 kubelet[1419]: I0513 00:20:51.142674 1419 scope.go:117] "RemoveContainer" containerID="a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875" May 13 00:20:51.143808 env[1210]: time="2025-05-13T00:20:51.143766620Z" level=info msg="RemoveContainer for \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\"" May 13 00:20:51.146335 env[1210]: time="2025-05-13T00:20:51.146299043Z" level=info msg="RemoveContainer for \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\" returns successfully" May 13 00:20:51.146595 kubelet[1419]: I0513 00:20:51.146565 1419 scope.go:117] "RemoveContainer" containerID="426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92" May 13 00:20:51.147711 env[1210]: time="2025-05-13T00:20:51.147674641Z" level=info msg="RemoveContainer for \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\"" May 13 00:20:51.150220 env[1210]: time="2025-05-13T00:20:51.150180462Z" level=info msg="RemoveContainer for \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\" returns successfully" May 13 00:20:51.150460 kubelet[1419]: I0513 00:20:51.150428 1419 scope.go:117] "RemoveContainer" containerID="33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b" May 13 00:20:51.150738 env[1210]: time="2025-05-13T00:20:51.150649369Z" level=error msg="ContainerStatus for \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\": not found" May 13 00:20:51.150917 kubelet[1419]: E0513 00:20:51.150868 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\": not found" containerID="33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b" May 13 00:20:51.151003 kubelet[1419]: I0513 00:20:51.150922 1419 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b"} err="failed to get container status \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"33a9578730f26612f5573a1b07cad67f1d26361effe6646e464632d2f5a36e0b\": not found" May 13 00:20:51.151081 kubelet[1419]: I0513 00:20:51.151046 1419 scope.go:117] "RemoveContainer" containerID="c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5" May 13 00:20:51.151329 env[1210]: time="2025-05-13T00:20:51.151273804Z" level=error msg="ContainerStatus for \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\": not found" May 13 00:20:51.151456 kubelet[1419]: E0513 00:20:51.151436 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\": not found" containerID="c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5" May 13 00:20:51.151488 kubelet[1419]: I0513 00:20:51.151464 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5"} err="failed to get container status \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6cee3af6849f0155a50deaba420708d83548979eaf1239c5511e77902ca9bb5\": not found" May 13 00:20:51.151488 kubelet[1419]: I0513 00:20:51.151480 1419 scope.go:117] "RemoveContainer" containerID="eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543" May 13 00:20:51.151805 env[1210]: time="2025-05-13T00:20:51.151753511Z" level=error msg="ContainerStatus for \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\": not found" May 13 00:20:51.151957 kubelet[1419]: E0513 00:20:51.151938 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\": not found" containerID="eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543" May 13 00:20:51.151988 kubelet[1419]: I0513 00:20:51.151966 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543"} err="failed to get container status \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\": rpc error: code = NotFound desc = an error occurred when try to find container \"eebd83c03da4989ae8fd4904bd08b85f99cee7c170d15d4ec159c5f38ec0f543\": not found" May 13 00:20:51.151988 kubelet[1419]: I0513 00:20:51.151981 1419 scope.go:117] "RemoveContainer" containerID="a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875" May 13 00:20:51.152372 env[1210]: time="2025-05-13T00:20:51.152316383Z" level=error msg="ContainerStatus for 
\"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\": not found" May 13 00:20:51.152494 kubelet[1419]: E0513 00:20:51.152475 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\": not found" containerID="a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875" May 13 00:20:51.152522 kubelet[1419]: I0513 00:20:51.152501 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875"} err="failed to get container status \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\": rpc error: code = NotFound desc = an error occurred when try to find container \"a52eecf1ff4c98a049045d7c86c0e0efad804128b1f5de40f6eaa29319516875\": not found" May 13 00:20:51.152522 kubelet[1419]: I0513 00:20:51.152519 1419 scope.go:117] "RemoveContainer" containerID="426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92" May 13 00:20:51.152812 env[1210]: time="2025-05-13T00:20:51.152763768Z" level=error msg="ContainerStatus for \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\": not found" May 13 00:20:51.152919 kubelet[1419]: E0513 00:20:51.152900 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\": not found" containerID="426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92" May 13 00:20:51.152945 kubelet[1419]: I0513 00:20:51.152925 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92"} err="failed to get container status \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\": rpc error: code = NotFound desc = an error occurred when try to find container \"426d63f3e8d713e2f998caaafd5c6966f00885139d0d819fd0fa83fcb6117e92\": not found" May 13 00:20:51.806682 systemd[1]: var-lib-kubelet-pods-7bb5ca81\x2da35c\x2d4bb6\x2dae8a\x2d5c4bca8d0e92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfq57p.mount: Deactivated successfully. May 13 00:20:51.806785 systemd[1]: var-lib-kubelet-pods-7bb5ca81\x2da35c\x2d4bb6\x2dae8a\x2d5c4bca8d0e92-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:20:51.816681 kubelet[1419]: E0513 00:20:51.816652 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:52.008108 kubelet[1419]: I0513 00:20:52.008071 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" path="/var/lib/kubelet/pods/7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92/volumes" May 13 00:20:52.817691 kubelet[1419]: E0513 00:20:52.817630 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:53.408330 kubelet[1419]: I0513 00:20:53.408282 1419 memory_manager.go:355] "RemoveStaleState removing state" podUID="7bb5ca81-a35c-4bb6-ae8a-5c4bca8d0e92" containerName="cilium-agent" May 13 00:20:53.415015 systemd[1]: Created slice kubepods-besteffort-pod7533eaf6_4c7c_488b_88cd_5a2ff963fb4b.slice. May 13 00:20:53.422190 systemd[1]: Created slice kubepods-burstable-pod7fdd3fc4_a5b7_4f15_a07f_e0d779223677.slice. May 13 00:20:53.435041 kubelet[1419]: I0513 00:20:53.435003 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-xtables-lock\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435041 kubelet[1419]: I0513 00:20:53.435041 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7533eaf6-4c7c-488b-88cd-5a2ff963fb4b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kftzb\" (UID: \"7533eaf6-4c7c-488b-88cd-5a2ff963fb4b\") " pod="kube-system/cilium-operator-6c4d7847fc-kftzb" May 13 00:20:53.435222 kubelet[1419]: I0513 00:20:53.435058 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-run\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435222 kubelet[1419]: I0513 00:20:53.435073 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-bpf-maps\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435222 kubelet[1419]: I0513 00:20:53.435090 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hostproc\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435222 kubelet[1419]: I0513 00:20:53.435104 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-net\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435222 kubelet[1419]: I0513 00:20:53.435119 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-cgroup\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435222 kubelet[1419]: I0513 00:20:53.435149 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-etc-cni-netd\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435354 kubelet[1419]: I0513 00:20:53.435166 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-kernel\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435354 kubelet[1419]: I0513 00:20:53.435185 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-lib-modules\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435354 kubelet[1419]: I0513 00:20:53.435200 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-clustermesh-secrets\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435354 kubelet[1419]: I0513 00:20:53.435214 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-config-path\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435354 kubelet[1419]: I0513 00:20:53.435230 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hubble-tls\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435461 kubelet[1419]: I0513 00:20:53.435261 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v588c\" (UniqueName: \"kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-kube-api-access-v588c\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435461 kubelet[1419]: I0513 00:20:53.435277 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25lz\" (UniqueName: \"kubernetes.io/projected/7533eaf6-4c7c-488b-88cd-5a2ff963fb4b-kube-api-access-b25lz\") pod \"cilium-operator-6c4d7847fc-kftzb\" (UID: \"7533eaf6-4c7c-488b-88cd-5a2ff963fb4b\") " pod="kube-system/cilium-operator-6c4d7847fc-kftzb" May 13 00:20:53.435461 kubelet[1419]: I0513 00:20:53.435292 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cni-path\") pod \"cilium-2hskx\" (UID: 
\"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.435461 kubelet[1419]: I0513 00:20:53.435305 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-ipsec-secrets\") pod \"cilium-2hskx\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " pod="kube-system/cilium-2hskx" May 13 00:20:53.587805 kubelet[1419]: E0513 00:20:53.587760 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:53.588691 env[1210]: time="2025-05-13T00:20:53.588285199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2hskx,Uid:7fdd3fc4-a5b7-4f15-a07f-e0d779223677,Namespace:kube-system,Attempt:0,}" May 13 00:20:53.600289 env[1210]: time="2025-05-13T00:20:53.600206343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:20:53.600396 env[1210]: time="2025-05-13T00:20:53.600294468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:20:53.600396 env[1210]: time="2025-05-13T00:20:53.600323749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:20:53.600529 env[1210]: time="2025-05-13T00:20:53.600477717Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1 pid=2991 runtime=io.containerd.runc.v2 May 13 00:20:53.610601 systemd[1]: Started cri-containerd-30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1.scope. May 13 00:20:53.649804 env[1210]: time="2025-05-13T00:20:53.649756178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2hskx,Uid:7fdd3fc4-a5b7-4f15-a07f-e0d779223677,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\"" May 13 00:20:53.650584 kubelet[1419]: E0513 00:20:53.650560 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:53.652093 env[1210]: time="2025-05-13T00:20:53.652059978Z" level=info msg="CreateContainer within sandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:20:53.660943 env[1210]: time="2025-05-13T00:20:53.660856719Z" level=info msg="CreateContainer within sandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\"" May 13 00:20:53.662501 env[1210]: time="2025-05-13T00:20:53.662458723Z" level=info msg="StartContainer for \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\"" May 13 00:20:53.675245 systemd[1]: Started cri-containerd-47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001.scope. May 13 00:20:53.693767 systemd[1]: cri-containerd-47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001.scope: Deactivated successfully. 
May 13 00:20:53.707139 env[1210]: time="2025-05-13T00:20:53.707080139Z" level=info msg="shim disconnected" id=47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001 May 13 00:20:53.707307 env[1210]: time="2025-05-13T00:20:53.707156583Z" level=warning msg="cleaning up after shim disconnected" id=47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001 namespace=k8s.io May 13 00:20:53.707307 env[1210]: time="2025-05-13T00:20:53.707167624Z" level=info msg="cleaning up dead shim" May 13 00:20:53.713919 env[1210]: time="2025-05-13T00:20:53.713876615Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T00:20:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 13 00:20:53.714249 env[1210]: time="2025-05-13T00:20:53.714152590Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" May 13 00:20:53.714379 env[1210]: time="2025-05-13T00:20:53.714352000Z" level=error msg="Failed to pipe stderr of container \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\"" error="reading from a closed fifo" May 13 00:20:53.714420 env[1210]: time="2025-05-13T00:20:53.714356320Z" level=error msg="Failed to pipe stdout of container \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\"" error="reading from a closed fifo" May 13 00:20:53.716017 env[1210]: time="2025-05-13T00:20:53.715963804Z" level=error msg="StartContainer for \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 13 00:20:53.716254 kubelet[1419]: E0513 00:20:53.716213 1419 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001" May 13 00:20:53.716608 kubelet[1419]: E0513 00:20:53.716559 1419 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 13 00:20:53.716608 kubelet[1419]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 13 00:20:53.716608 kubelet[1419]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 13 00:20:53.716608 kubelet[1419]: rm /hostbin/cilium-mount May 13 00:20:53.716786 kubelet[1419]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v588c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-2hskx_kube-system(7fdd3fc4-a5b7-4f15-a07f-e0d779223677): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 13 00:20:53.716786 kubelet[1419]: > logger="UnhandledError" May 13 00:20:53.717762 kubelet[1419]: E0513 00:20:53.717712 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2hskx" podUID="7fdd3fc4-a5b7-4f15-a07f-e0d779223677" May 13 00:20:53.718879 kubelet[1419]: E0513 00:20:53.718838 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:53.719405 env[1210]: time="2025-05-13T00:20:53.719376583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kftzb,Uid:7533eaf6-4c7c-488b-88cd-5a2ff963fb4b,Namespace:kube-system,Attempt:0,}" May 13 00:20:53.730569 env[1210]: time="2025-05-13T00:20:53.730501286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:20:53.730569 env[1210]: time="2025-05-13T00:20:53.730537888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:20:53.730569 env[1210]: time="2025-05-13T00:20:53.730548288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:20:53.730720 env[1210]: time="2025-05-13T00:20:53.730660894Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29796e481cfcef6c6769951f5d404d8fa97e07dabc55ebbd1c68b57717584248 pid=3071 runtime=io.containerd.runc.v2 May 13 00:20:53.740274 systemd[1]: Started cri-containerd-29796e481cfcef6c6769951f5d404d8fa97e07dabc55ebbd1c68b57717584248.scope. May 13 00:20:53.773722 env[1210]: time="2025-05-13T00:20:53.773672826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kftzb,Uid:7533eaf6-4c7c-488b-88cd-5a2ff963fb4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"29796e481cfcef6c6769951f5d404d8fa97e07dabc55ebbd1c68b57717584248\"" May 13 00:20:53.774551 kubelet[1419]: E0513 00:20:53.774512 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:53.775483 env[1210]: time="2025-05-13T00:20:53.775454079Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:20:53.817887 kubelet[1419]: E0513 00:20:53.817848 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:54.126087 env[1210]: time="2025-05-13T00:20:54.125945527Z" level=info msg="StopPodSandbox for \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\"" May 13 00:20:54.126087 env[1210]: time="2025-05-13T00:20:54.126013530Z" level=info msg="Container to stop \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:20:54.131900 systemd[1]: cri-containerd-30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1.scope: Deactivated successfully. 
May 13 00:20:54.153007 env[1210]: time="2025-05-13T00:20:54.152956371Z" level=info msg="shim disconnected" id=30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1 May 13 00:20:54.153007 env[1210]: time="2025-05-13T00:20:54.153005254Z" level=warning msg="cleaning up after shim disconnected" id=30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1 namespace=k8s.io May 13 00:20:54.153007 env[1210]: time="2025-05-13T00:20:54.153014614Z" level=info msg="cleaning up dead shim" May 13 00:20:54.160110 env[1210]: time="2025-05-13T00:20:54.160069450Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3123 runtime=io.containerd.runc.v2\n" May 13 00:20:54.160398 env[1210]: time="2025-05-13T00:20:54.160361025Z" level=info msg="TearDown network for sandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" successfully" May 13 00:20:54.160398 env[1210]: time="2025-05-13T00:20:54.160388507Z" level=info msg="StopPodSandbox for \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" returns successfully" May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240510 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-net\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240544 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-kernel\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240564 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hostproc\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240597 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-clustermesh-secrets\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240617 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-ipsec-secrets\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240631 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-lib-modules\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240652 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-xtables-lock\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" 
(UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240667 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-run\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240682 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-bpf-maps\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240698 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-cgroup\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240711 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-etc-cni-netd\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240740 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cni-path\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240760 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-config-path\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240778 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hubble-tls\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.240803 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v588c\" (UniqueName: \"kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-kube-api-access-v588c\") pod \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\" (UID: \"7fdd3fc4-a5b7-4f15-a07f-e0d779223677\") " May 13 00:20:54.241857 kubelet[1419]: I0513 00:20:54.241555 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.241589 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.241608 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.241907 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.242016 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.242038 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.242053 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.242082 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.242101 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.242626 kubelet[1419]: I0513 00:20:54.242036 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:20:54.243996 kubelet[1419]: I0513 00:20:54.243959 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-kube-api-access-v588c" (OuterVolumeSpecName: "kube-api-access-v588c") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "kube-api-access-v588c". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:20:54.244100 kubelet[1419]: I0513 00:20:54.244024 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:20:54.245072 kubelet[1419]: I0513 00:20:54.245026 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:20:54.245175 kubelet[1419]: I0513 00:20:54.245113 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:20:54.247385 kubelet[1419]: I0513 00:20:54.247339 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fdd3fc4-a5b7-4f15-a07f-e0d779223677" (UID: "7fdd3fc4-a5b7-4f15-a07f-e0d779223677"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341208 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v588c\" (UniqueName: \"kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-kube-api-access-v588c\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341244 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hostproc\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341258 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-net\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341267 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-host-proc-sys-kernel\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341275 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-ipsec-secrets\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341284 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-lib-modules\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341292 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-clustermesh-secrets\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341300 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-cgroup\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341308 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-etc-cni-netd\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341316 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-xtables-lock\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341324 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-run\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341333 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-bpf-maps\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341341 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cilium-config-path\") on 
node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341349 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-hubble-tls\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.342642 kubelet[1419]: I0513 00:20:54.341356 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fdd3fc4-a5b7-4f15-a07f-e0d779223677-cni-path\") on node \"10.0.0.39\" DevicePath \"\"" May 13 00:20:54.541820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1-shm.mount: Deactivated successfully. May 13 00:20:54.541906 systemd[1]: var-lib-kubelet-pods-7fdd3fc4\x2da5b7\x2d4f15\x2da07f\x2de0d779223677-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv588c.mount: Deactivated successfully. May 13 00:20:54.541963 systemd[1]: var-lib-kubelet-pods-7fdd3fc4\x2da5b7\x2d4f15\x2da07f\x2de0d779223677-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:20:54.542013 systemd[1]: var-lib-kubelet-pods-7fdd3fc4\x2da5b7\x2d4f15\x2da07f\x2de0d779223677-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:20:54.542059 systemd[1]: var-lib-kubelet-pods-7fdd3fc4\x2da5b7\x2d4f15\x2da07f\x2de0d779223677-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 13 00:20:54.818213 kubelet[1419]: E0513 00:20:54.818077 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:54.905455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679770099.mount: Deactivated successfully. May 13 00:20:54.963547 kubelet[1419]: E0513 00:20:54.963484 1419 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:20:55.129010 kubelet[1419]: I0513 00:20:55.128910 1419 scope.go:117] "RemoveContainer" containerID="47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001" May 13 00:20:55.130210 env[1210]: time="2025-05-13T00:20:55.130169179Z" level=info msg="RemoveContainer for \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\"" May 13 00:20:55.132972 env[1210]: time="2025-05-13T00:20:55.132922513Z" level=info msg="RemoveContainer for \"47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001\" returns successfully" May 13 00:20:55.133052 systemd[1]: Removed slice kubepods-burstable-pod7fdd3fc4_a5b7_4f15_a07f_e0d779223677.slice. May 13 00:20:55.165900 kubelet[1419]: I0513 00:20:55.165853 1419 memory_manager.go:355] "RemoveStaleState removing state" podUID="7fdd3fc4-a5b7-4f15-a07f-e0d779223677" containerName="mount-cgroup" May 13 00:20:55.170620 systemd[1]: Created slice kubepods-burstable-pod0741fbb7_6e68_4ab6_aeb2_dcdb784afb1a.slice. 
May 13 00:20:55.246379 kubelet[1419]: I0513 00:20:55.246250 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-cilium-run\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246379 kubelet[1419]: I0513 00:20:55.246321 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-cilium-cgroup\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246379 kubelet[1419]: I0513 00:20:55.246350 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-clustermesh-secrets\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246392 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-cilium-config-path\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246411 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-host-proc-sys-net\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246426 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-hubble-tls\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246443 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-cilium-ipsec-secrets\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246481 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-xtables-lock\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246500 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-host-proc-sys-kernel\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246516 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-bpf-maps\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246540 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-hostproc\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246557 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-cni-path\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246585 kubelet[1419]: I0513 00:20:55.246572 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-etc-cni-netd\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246825 kubelet[1419]: I0513 00:20:55.246607 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-lib-modules\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.246825 kubelet[1419]: I0513 00:20:55.246648 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4pn\" (UniqueName: \"kubernetes.io/projected/0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a-kube-api-access-dn4pn\") pod \"cilium-55nd8\" (UID: \"0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a\") " pod="kube-system/cilium-55nd8" May 13 00:20:55.483216 kubelet[1419]: E0513 00:20:55.483165 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:55.483687 env[1210]: time="2025-05-13T00:20:55.483646975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-55nd8,Uid:0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a,Namespace:kube-system,Attempt:0,}" May 13 00:20:55.498980 env[1210]: time="2025-05-13T00:20:55.498895239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:20:55.498980 env[1210]: time="2025-05-13T00:20:55.498934800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:20:55.498980 env[1210]: time="2025-05-13T00:20:55.498945361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:20:55.499176 env[1210]: time="2025-05-13T00:20:55.499115369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef pid=3152 runtime=io.containerd.runc.v2 May 13 00:20:55.508874 systemd[1]: Started cri-containerd-724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef.scope. 
May 13 00:20:55.550809 env[1210]: time="2025-05-13T00:20:55.550745447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-55nd8,Uid:0741fbb7-6e68-4ab6-aeb2-dcdb784afb1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\"" May 13 00:20:55.551410 kubelet[1419]: E0513 00:20:55.551391 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:55.553206 env[1210]: time="2025-05-13T00:20:55.553170525Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:20:55.563039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349061531.mount: Deactivated successfully. May 13 00:20:55.567969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289012006.mount: Deactivated successfully. May 13 00:20:55.572177 env[1210]: time="2025-05-13T00:20:55.572108089Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64\"" May 13 00:20:55.572799 env[1210]: time="2025-05-13T00:20:55.572765601Z" level=info msg="StartContainer for \"391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64\"" May 13 00:20:55.586488 systemd[1]: Started cri-containerd-391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64.scope. May 13 00:20:55.622579 env[1210]: time="2025-05-13T00:20:55.622535788Z" level=info msg="StartContainer for \"391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64\" returns successfully" May 13 00:20:55.627959 systemd[1]: cri-containerd-391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64.scope: Deactivated successfully. 
May 13 00:20:55.677114 env[1210]: time="2025-05-13T00:20:55.677068247Z" level=info msg="shim disconnected" id=391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64 May 13 00:20:55.677114 env[1210]: time="2025-05-13T00:20:55.677111409Z" level=warning msg="cleaning up after shim disconnected" id=391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64 namespace=k8s.io May 13 00:20:55.677351 env[1210]: time="2025-05-13T00:20:55.677153291Z" level=info msg="cleaning up dead shim" May 13 00:20:55.683782 env[1210]: time="2025-05-13T00:20:55.683738412Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3236 runtime=io.containerd.runc.v2\n" May 13 00:20:55.819339 kubelet[1419]: E0513 00:20:55.819220 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:55.872370 env[1210]: time="2025-05-13T00:20:55.872327488Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:55.873750 env[1210]: time="2025-05-13T00:20:55.873723116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:55.875252 env[1210]: time="2025-05-13T00:20:55.875227229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:55.875683 env[1210]: time="2025-05-13T00:20:55.875651130Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 00:20:55.877977 env[1210]: time="2025-05-13T00:20:55.877946362Z" level=info msg="CreateContainer within sandbox \"29796e481cfcef6c6769951f5d404d8fa97e07dabc55ebbd1c68b57717584248\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:20:55.888275 env[1210]: time="2025-05-13T00:20:55.888236304Z" level=info msg="CreateContainer within sandbox \"29796e481cfcef6c6769951f5d404d8fa97e07dabc55ebbd1c68b57717584248\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"18f79995acbefff482a4370c10dffb975910b03fb64d5df84b956e9386128449\"" May 13 00:20:55.888889 env[1210]: time="2025-05-13T00:20:55.888688926Z" level=info msg="StartContainer for \"18f79995acbefff482a4370c10dffb975910b03fb64d5df84b956e9386128449\"" May 13 00:20:55.903346 systemd[1]: Started cri-containerd-18f79995acbefff482a4370c10dffb975910b03fb64d5df84b956e9386128449.scope. 
May 13 00:20:55.953992 env[1210]: time="2025-05-13T00:20:55.953919267Z" level=info msg="StartContainer for \"18f79995acbefff482a4370c10dffb975910b03fb64d5df84b956e9386128449\" returns successfully" May 13 00:20:56.009183 kubelet[1419]: I0513 00:20:56.009142 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdd3fc4-a5b7-4f15-a07f-e0d779223677" path="/var/lib/kubelet/pods/7fdd3fc4-a5b7-4f15-a07f-e0d779223677/volumes" May 13 00:20:56.132718 kubelet[1419]: E0513 00:20:56.132396 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:56.134714 kubelet[1419]: E0513 00:20:56.134693 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:56.136967 env[1210]: time="2025-05-13T00:20:56.136930889Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:20:56.159364 env[1210]: time="2025-05-13T00:20:56.159298983Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65\"" May 13 00:20:56.160166 env[1210]: time="2025-05-13T00:20:56.160139543Z" level=info msg="StartContainer for \"22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65\"" May 13 00:20:56.164968 kubelet[1419]: I0513 00:20:56.164770 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kftzb" podStartSLOduration=1.0633802509999999 podStartE2EDuration="3.16475232s" podCreationTimestamp="2025-05-13 00:20:53 +0000 UTC" firstStartedPulling="2025-05-13 00:20:53.775218387 +0000 UTC m=+54.752099814" lastFinishedPulling="2025-05-13 00:20:55.876590496 +0000 UTC m=+56.853471883" observedRunningTime="2025-05-13 00:20:56.142010048 +0000 UTC m=+57.118891475" watchObservedRunningTime="2025-05-13 00:20:56.16475232 +0000 UTC m=+57.141633747" May 13 00:20:56.178032 systemd[1]: Started cri-containerd-22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65.scope. May 13 00:20:56.215568 env[1210]: time="2025-05-13T00:20:56.215496472Z" level=info msg="StartContainer for \"22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65\" returns successfully" May 13 00:20:56.236005 systemd[1]: cri-containerd-22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65.scope: Deactivated successfully. 
May 13 00:20:56.257271 env[1210]: time="2025-05-13T00:20:56.257222678Z" level=info msg="shim disconnected" id=22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65 May 13 00:20:56.257271 env[1210]: time="2025-05-13T00:20:56.257269721Z" level=warning msg="cleaning up after shim disconnected" id=22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65 namespace=k8s.io May 13 00:20:56.257271 env[1210]: time="2025-05-13T00:20:56.257278481Z" level=info msg="cleaning up dead shim" May 13 00:20:56.264170 env[1210]: time="2025-05-13T00:20:56.264119643Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3337 runtime=io.containerd.runc.v2\n" May 13 00:20:56.810660 kubelet[1419]: W0513 00:20:56.810518 1419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fdd3fc4_a5b7_4f15_a07f_e0d779223677.slice/cri-containerd-47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001.scope WatchSource:0}: container "47493cd697c261abd746a0a9485dacf075e701f98e25da0d6c2e1233280c6001" in namespace "k8s.io": not found May 13 00:20:56.819350 kubelet[1419]: E0513 00:20:56.819313 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:57.138619 kubelet[1419]: E0513 00:20:57.138583 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:57.139318 kubelet[1419]: E0513 00:20:57.139296 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:57.141214 env[1210]: time="2025-05-13T00:20:57.141160808Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:20:57.204007 env[1210]: time="2025-05-13T00:20:57.203956792Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5\"" May 13 00:20:57.206102 env[1210]: time="2025-05-13T00:20:57.206064328Z" level=info msg="StartContainer for \"e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5\"" May 13 00:20:57.225669 systemd[1]: Started cri-containerd-e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5.scope. May 13 00:20:57.262244 env[1210]: time="2025-05-13T00:20:57.262061761Z" level=info msg="StartContainer for \"e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5\" returns successfully" May 13 00:20:57.265839 systemd[1]: cri-containerd-e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5.scope: Deactivated successfully. 
May 13 00:20:57.287735 env[1210]: time="2025-05-13T00:20:57.287686330Z" level=info msg="shim disconnected" id=e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5 May 13 00:20:57.287966 env[1210]: time="2025-05-13T00:20:57.287944301Z" level=warning msg="cleaning up after shim disconnected" id=e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5 namespace=k8s.io May 13 00:20:57.288041 env[1210]: time="2025-05-13T00:20:57.288028385Z" level=info msg="cleaning up dead shim" May 13 00:20:57.294713 env[1210]: time="2025-05-13T00:20:57.294677969Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3395 runtime=io.containerd.runc.v2\n" May 13 00:20:57.541095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5-rootfs.mount: Deactivated successfully. May 13 00:20:57.820059 kubelet[1419]: E0513 00:20:57.819947 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:58.142814 kubelet[1419]: E0513 00:20:58.142784 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:58.144754 env[1210]: time="2025-05-13T00:20:58.144716008Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:20:58.168953 env[1210]: time="2025-05-13T00:20:58.168265288Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c\"" May 13 00:20:58.170590 env[1210]: time="2025-05-13T00:20:58.169735473Z" level=info msg="StartContainer for \"fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c\"" May 13 00:20:58.195508 systemd[1]: Started cri-containerd-fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c.scope. May 13 00:20:58.226365 systemd[1]: cri-containerd-fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c.scope: Deactivated successfully. 
May 13 00:20:58.228189 env[1210]: time="2025-05-13T00:20:58.228083490Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0741fbb7_6e68_4ab6_aeb2_dcdb784afb1a.slice/cri-containerd-fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c.scope/memory.events\": no such file or directory" May 13 00:20:58.231611 env[1210]: time="2025-05-13T00:20:58.231574564Z" level=info msg="StartContainer for \"fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c\" returns successfully" May 13 00:20:58.248452 env[1210]: time="2025-05-13T00:20:58.248408707Z" level=info msg="shim disconnected" id=fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c May 13 00:20:58.248645 env[1210]: time="2025-05-13T00:20:58.248626157Z" level=warning msg="cleaning up after shim disconnected" id=fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c namespace=k8s.io May 13 00:20:58.248717 env[1210]: time="2025-05-13T00:20:58.248705001Z" level=info msg="cleaning up dead shim" May 13 00:20:58.254718 env[1210]: time="2025-05-13T00:20:58.254683345Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:20:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3450 runtime=io.containerd.runc.v2\n" May 13 00:20:58.541751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c-rootfs.mount: Deactivated successfully. May 13 00:20:58.820504 kubelet[1419]: E0513 00:20:58.820393 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:59.147422 kubelet[1419]: E0513 00:20:59.147181 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:59.148928 env[1210]: time="2025-05-13T00:20:59.148887442Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:20:59.160584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2238629557.mount: Deactivated successfully. May 13 00:20:59.167739 env[1210]: time="2025-05-13T00:20:59.167688567Z" level=info msg="CreateContainer within sandbox \"724f4cd0a462d51b435f87fdcb72f917f600abcb5d6c5922f3280bbffd174fef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fce2cd8935b9d88780d1196dc59cb6d9fa2c84a27105c2a9356df964733bf07d\"" May 13 00:20:59.168259 env[1210]: time="2025-05-13T00:20:59.168227630Z" level=info msg="StartContainer for \"fce2cd8935b9d88780d1196dc59cb6d9fa2c84a27105c2a9356df964733bf07d\"" May 13 00:20:59.183177 systemd[1]: Started cri-containerd-fce2cd8935b9d88780d1196dc59cb6d9fa2c84a27105c2a9356df964733bf07d.scope. 
May 13 00:20:59.218398 env[1210]: time="2025-05-13T00:20:59.218350936Z" level=info msg="StartContainer for \"fce2cd8935b9d88780d1196dc59cb6d9fa2c84a27105c2a9356df964733bf07d\" returns successfully" May 13 00:20:59.457216 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 13 00:20:59.779482 kubelet[1419]: E0513 00:20:59.779427 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:59.806830 env[1210]: time="2025-05-13T00:20:59.806642366Z" level=info msg="StopPodSandbox for \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\"" May 13 00:20:59.806830 env[1210]: time="2025-05-13T00:20:59.806745370Z" level=info msg="TearDown network for sandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" successfully" May 13 00:20:59.806830 env[1210]: time="2025-05-13T00:20:59.806779172Z" level=info msg="StopPodSandbox for \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" returns successfully" May 13 00:20:59.807156 env[1210]: time="2025-05-13T00:20:59.807103426Z" level=info msg="RemovePodSandbox for \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\"" May 13 00:20:59.807218 env[1210]: time="2025-05-13T00:20:59.807167188Z" level=info msg="Forcibly stopping sandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\"" May 13 00:20:59.807267 env[1210]: time="2025-05-13T00:20:59.807249392Z" level=info msg="TearDown network for sandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" successfully" May 13 00:20:59.821068 kubelet[1419]: E0513 00:20:59.821028 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:20:59.821384 env[1210]: time="2025-05-13T00:20:59.821176388Z" level=info msg="RemovePodSandbox \"30a68ba53e59c58a5b0763758f7683eca48a55e61a2f75bd2ce7ea7d8a01aac1\" returns successfully" May 13 00:20:59.821606 env[1210]: time="2025-05-13T00:20:59.821560525Z" level=info msg="StopPodSandbox for \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\"" May 13 00:20:59.821730 env[1210]: time="2025-05-13T00:20:59.821641728Z" level=info msg="TearDown network for sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" successfully" May 13 00:20:59.821730 env[1210]: time="2025-05-13T00:20:59.821676690Z" level=info msg="StopPodSandbox for \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" returns successfully" May 13 00:20:59.821940 env[1210]: time="2025-05-13T00:20:59.821904459Z" level=info msg="RemovePodSandbox for \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\"" May 13 00:20:59.821988 env[1210]: time="2025-05-13T00:20:59.821933381Z" level=info msg="Forcibly stopping sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\"" May 13 00:20:59.822022 env[1210]: time="2025-05-13T00:20:59.822003664Z" level=info msg="TearDown network for sandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" successfully" May 13 00:20:59.824654 env[1210]: time="2025-05-13T00:20:59.824617136Z" level=info msg="RemovePodSandbox \"06d66311cf8d8d575be306b19d41821795c397ba5a548cdfd5784053cab53a8b\" returns successfully" May 13 00:20:59.922006 kubelet[1419]: W0513 00:20:59.921949 1419 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0741fbb7_6e68_4ab6_aeb2_dcdb784afb1a.slice/cri-containerd-391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64.scope WatchSource:0}: task 391a49c473f0a5a3b14694ade5c29f93447b1f7f76fb3a89e1ec201787089e64 not found: not found May 13 00:21:00.152737 kubelet[1419]: E0513 00:21:00.152341 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:00.822131 kubelet[1419]: E0513 00:21:00.822068 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:01.484459 kubelet[1419]: E0513 00:21:01.484422 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:01.828199 kubelet[1419]: E0513 00:21:01.823086 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:02.245209 systemd-networkd[1056]: lxc_health: Link UP May 13 00:21:02.261901 systemd-networkd[1056]: lxc_health: Gained carrier May 13 00:21:02.262147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:21:02.824028 kubelet[1419]: E0513 00:21:02.823982 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:03.031848 kubelet[1419]: W0513 00:21:03.031807 1419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0741fbb7_6e68_4ab6_aeb2_dcdb784afb1a.slice/cri-containerd-22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65.scope WatchSource:0}: task 22f07cde7b6ca51f36a1b58cc0108ee9bb669aae7118935effa2b65f769d9e65 not found: not found May 13 00:21:03.376285 systemd-networkd[1056]: lxc_health: Gained IPv6LL May 13 00:21:03.485280 kubelet[1419]: E0513 00:21:03.485151 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:03.504098 kubelet[1419]: I0513 00:21:03.504036 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-55nd8" podStartSLOduration=8.504021158 podStartE2EDuration="8.504021158s" podCreationTimestamp="2025-05-13 00:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:00.168072193 +0000 UTC m=+61.144953940" watchObservedRunningTime="2025-05-13 00:21:03.504021158 +0000 UTC m=+64.480902545" May 13 00:21:03.824783 kubelet[1419]: E0513 00:21:03.824747 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:04.159298 kubelet[1419]: E0513 00:21:04.159096 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:04.825623 kubelet[1419]: E0513 00:21:04.825574 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:05.160856 kubelet[1419]: E0513 00:21:05.160828 1419 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:05.826455 kubelet[1419]: E0513 00:21:05.826395 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:06.150792 kubelet[1419]: W0513 00:21:06.150748 1419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0741fbb7_6e68_4ab6_aeb2_dcdb784afb1a.slice/cri-containerd-e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5.scope WatchSource:0}: task e92d9d46e9ca6547517b99524bbc5e5b251e56165a50b2df2c5c93e279fbe8a5 not found: not found May 13 00:21:06.826550 kubelet[1419]: E0513 00:21:06.826489 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:07.826912 kubelet[1419]: E0513 00:21:07.826864 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:08.827066 kubelet[1419]: E0513 00:21:08.826999 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:21:09.263086 kubelet[1419]: W0513 00:21:09.263049 1419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0741fbb7_6e68_4ab6_aeb2_dcdb784afb1a.slice/cri-containerd-fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c.scope WatchSource:0}: task fdc722228affeeea870c64528e7fa9961a8dd373232ff2b65338127cd417138c not found: not found May 13 00:21:09.828110 kubelet[1419]: E0513 00:21:09.828074 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"