May 14 00:48:53.773990 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 00:48:53.774010 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025 May 14 00:48:53.774018 kernel: efi: EFI v2.70 by EDK II May 14 00:48:53.774024 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 14 00:48:53.774029 kernel: random: crng init done May 14 00:48:53.774034 kernel: ACPI: Early table checksum verification disabled May 14 00:48:53.774041 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 14 00:48:53.774048 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 00:48:53.774054 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774059 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774065 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774070 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774075 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774081 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774089 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774095 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774100 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:48:53.774106 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 00:48:53.774112 kernel: NUMA: Failed to initialise from firmware May 14 00:48:53.774118 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:48:53.774124 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] May 14 00:48:53.774129 kernel: Zone ranges: May 14 00:48:53.774135 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:48:53.774142 kernel: DMA32 empty May 14 00:48:53.774148 kernel: Normal empty May 14 00:48:53.774153 kernel: Movable zone start for each node May 14 00:48:53.774159 kernel: Early memory node ranges May 14 00:48:53.774165 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 14 00:48:53.774171 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 14 00:48:53.774176 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 14 00:48:53.774182 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 14 00:48:53.774188 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 14 00:48:53.774193 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 14 00:48:53.774199 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 14 00:48:53.774205 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:48:53.774212 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 00:48:53.774218 kernel: psci: probing for conduit method from ACPI. May 14 00:48:53.774223 kernel: psci: PSCIv1.1 detected in firmware. 
May 14 00:48:53.774229 kernel: psci: Using standard PSCI v0.2 function IDs May 14 00:48:53.774235 kernel: psci: Trusted OS migration not required May 14 00:48:53.774243 kernel: psci: SMC Calling Convention v1.1 May 14 00:48:53.774249 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 00:48:53.774257 kernel: ACPI: SRAT not present May 14 00:48:53.774263 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 14 00:48:53.774269 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 14 00:48:53.774276 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 00:48:53.774282 kernel: Detected PIPT I-cache on CPU0 May 14 00:48:53.774288 kernel: CPU features: detected: GIC system register CPU interface May 14 00:48:53.774294 kernel: CPU features: detected: Hardware dirty bit management May 14 00:48:53.774300 kernel: CPU features: detected: Spectre-v4 May 14 00:48:53.774306 kernel: CPU features: detected: Spectre-BHB May 14 00:48:53.774313 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 00:48:53.774320 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 00:48:53.774326 kernel: CPU features: detected: ARM erratum 1418040 May 14 00:48:53.774332 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 00:48:53.774338 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 14 00:48:53.774344 kernel: Policy zone: DMA May 14 00:48:53.774351 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:48:53.774358 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:48:53.774364 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:48:53.774370 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:48:53.774376 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:48:53.774384 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114956K reserved, 0K cma-reserved) May 14 00:48:53.774390 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 00:48:53.774396 kernel: trace event string verifier disabled May 14 00:48:53.774403 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:48:53.774409 kernel: rcu: RCU event tracing is enabled. May 14 00:48:53.774415 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 00:48:53.774422 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:48:53.774428 kernel: Tracing variant of Tasks RCU enabled. May 14 00:48:53.774434 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 00:48:53.774440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 00:48:53.774447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 00:48:53.774454 kernel: GICv3: 256 SPIs implemented May 14 00:48:53.774460 kernel: GICv3: 0 Extended SPIs implemented May 14 00:48:53.774466 kernel: GICv3: Distributor has no Range Selector support May 14 00:48:53.774472 kernel: Root IRQ handler: gic_handle_irq May 14 00:48:53.774478 kernel: GICv3: 16 PPIs implemented May 14 00:48:53.774484 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 00:48:53.774490 kernel: ACPI: SRAT not present May 14 00:48:53.774496 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 00:48:53.774502 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:48:53.774509 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 14 00:48:53.774515 kernel: GICv3: using LPI property table @0x00000000400d0000 May 14 00:48:53.774521 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 14 00:48:53.774528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:48:53.774534 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 00:48:53.774541 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 00:48:53.774547 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 00:48:53.774553 kernel: arm-pv: using stolen time PV May 14 00:48:53.774559 kernel: Console: colour dummy device 80x25 May 14 00:48:53.774566 kernel: ACPI: Core revision 20210730 May 14 00:48:53.774572 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 00:48:53.774579 kernel: pid_max: default: 32768 minimum: 301 May 14 00:48:53.774585 kernel: LSM: Security Framework initializing May 14 00:48:53.774592 kernel: SELinux: Initializing. May 14 00:48:53.774599 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:48:53.774605 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:48:53.774611 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 14 00:48:53.774618 kernel: rcu: Hierarchical SRCU implementation. May 14 00:48:53.774624 kernel: Platform MSI: ITS@0x8080000 domain created May 14 00:48:53.774631 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 00:48:53.774637 kernel: Remapping and enabling EFI services. May 14 00:48:53.774643 kernel: smp: Bringing up secondary CPUs ... 
May 14 00:48:53.774651 kernel: Detected PIPT I-cache on CPU1 May 14 00:48:53.774657 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 00:48:53.774664 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 14 00:48:53.774670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:48:53.774677 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 00:48:53.774683 kernel: Detected PIPT I-cache on CPU2 May 14 00:48:53.774701 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 00:48:53.774709 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 14 00:48:53.774715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:48:53.774721 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 00:48:53.774729 kernel: Detected PIPT I-cache on CPU3 May 14 00:48:53.774736 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 00:48:53.774742 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 14 00:48:53.774769 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:48:53.774781 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 00:48:53.774789 kernel: smp: Brought up 1 node, 4 CPUs May 14 00:48:53.774797 kernel: SMP: Total of 4 processors activated. May 14 00:48:53.774803 kernel: CPU features: detected: 32-bit EL0 Support May 14 00:48:53.774810 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 00:48:53.774817 kernel: CPU features: detected: Common not Private translations May 14 00:48:53.774824 kernel: CPU features: detected: CRC32 instructions May 14 00:48:53.774830 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 00:48:53.774838 kernel: CPU features: detected: LSE atomic instructions May 14 00:48:53.774845 kernel: CPU features: detected: Privileged Access Never May 14 00:48:53.774851 kernel: CPU features: detected: RAS Extension Support May 14 00:48:53.774866 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 00:48:53.774873 kernel: CPU: All CPU(s) started at EL1 May 14 00:48:53.774881 kernel: alternatives: patching kernel code May 14 00:48:53.774887 kernel: devtmpfs: initialized May 14 00:48:53.774894 kernel: KASLR enabled May 14 00:48:53.774908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:48:53.774929 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 00:48:53.774936 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:48:53.774942 kernel: SMBIOS 3.0.0 present. 
May 14 00:48:53.774949 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 14 00:48:53.774955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:48:53.774964 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 00:48:53.774970 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 00:48:53.774977 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 00:48:53.774984 kernel: audit: initializing netlink subsys (disabled) May 14 00:48:53.774990 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 May 14 00:48:53.774997 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:48:53.775004 kernel: cpuidle: using governor menu May 14 00:48:53.775010 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 14 00:48:53.775017 kernel: ASID allocator initialised with 32768 entries May 14 00:48:53.775025 kernel: ACPI: bus type PCI registered May 14 00:48:53.775031 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:48:53.775038 kernel: Serial: AMBA PL011 UART driver May 14 00:48:53.775044 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:48:53.775051 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 14 00:48:53.775057 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:48:53.775064 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 14 00:48:53.775070 kernel: cryptd: max_cpu_qlen set to 1000 May 14 00:48:53.775077 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 00:48:53.775085 kernel: ACPI: Added _OSI(Module Device) May 14 00:48:53.775092 kernel: ACPI: Added _OSI(Processor Device) May 14 00:48:53.775098 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:48:53.775104 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:48:53.775111 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 14 00:48:53.775117 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 14 00:48:53.775124 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 14 00:48:53.775130 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:48:53.775137 kernel: ACPI: Interpreter enabled May 14 00:48:53.775144 kernel: ACPI: Using GIC for interrupt routing May 14 00:48:53.775151 kernel: ACPI: MCFG table detected, 1 entries May 14 00:48:53.775157 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 00:48:53.775164 kernel: printk: console [ttyAMA0] enabled May 14 00:48:53.775170 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:48:53.775309 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:48:53.775374 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 00:48:53.775433 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 00:48:53.775490 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 00:48:53.775546 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 00:48:53.775554 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 00:48:53.775561 kernel: PCI host bridge to bus 0000:00 May 14 00:48:53.775625 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 14 00:48:53.775677 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 14 00:48:53.775737 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 00:48:53.775792 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:48:53.775863 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 14 00:48:53.775949 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 14 00:48:53.776011 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 14 00:48:53.776069 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 14 00:48:53.776128 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:48:53.776189 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:48:53.776250 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 14 00:48:53.776309 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 14 00:48:53.776361 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 00:48:53.776412 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 00:48:53.776462 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 00:48:53.776471 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 00:48:53.776478 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 00:48:53.776486 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 00:48:53.776492 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 00:48:53.776499 kernel: iommu: Default domain type: Translated May 14 00:48:53.776505 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 00:48:53.776512 kernel: vgaarb: loaded May 14 00:48:53.776518 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 00:48:53.776525 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 00:48:53.776532 kernel: PTP clock support registered May 14 00:48:53.776538 kernel: Registered efivars operations May 14 00:48:53.776547 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 00:48:53.776553 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:48:53.776560 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:48:53.776567 kernel: pnp: PnP ACPI init May 14 00:48:53.776632 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 00:48:53.776642 kernel: pnp: PnP ACPI: found 1 devices May 14 00:48:53.776648 kernel: NET: Registered PF_INET protocol family May 14 00:48:53.776655 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 00:48:53.776664 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 00:48:53.776670 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:48:53.776677 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 00:48:53.776693 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 14 00:48:53.776701 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 00:48:53.776707 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:48:53.776714 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:48:53.776720 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:48:53.776727 kernel: PCI: CLS 0 bytes, default 64 May 14 00:48:53.776735 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 00:48:53.776741 kernel: kvm [1]: HYP mode not available May 14 00:48:53.776748 kernel: Initialise system trusted keyrings May 14 00:48:53.776755 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 00:48:53.776761 kernel: Key type asymmetric registered May 14 00:48:53.776768 kernel: Asymmetric key parser 'x509' registered May 14 00:48:53.776774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 00:48:53.776781 kernel: io scheduler mq-deadline registered May 14 00:48:53.776787 kernel: io scheduler kyber registered May 14 00:48:53.776795 kernel: io scheduler bfq registered May 14 00:48:53.776802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 00:48:53.776808 kernel: ACPI: button: Power Button [PWRB] May 14 00:48:53.776815 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 00:48:53.776880 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 00:48:53.776889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:48:53.776905 kernel: thunder_xcv, ver 1.0 May 14 00:48:53.776912 kernel: thunder_bgx, ver 1.0 May 14 00:48:53.776919 kernel: nicpf, ver 1.0 May 14 00:48:53.776927 kernel: nicvf, ver 1.0 May 14 00:48:53.776999 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 00:48:53.777060 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:48:53 UTC (1747183733) May 14 00:48:53.777070 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:48:53.777076 kernel: NET: Registered PF_INET6 protocol family May 14 00:48:53.777083 kernel: Segment Routing with IPv6 May 14 00:48:53.777089 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:48:53.777096 kernel: NET: Registered PF_PACKET protocol family May 14 00:48:53.777105 kernel: Key type 
dns_resolver registered May 14 00:48:53.777111 kernel: registered taskstats version 1 May 14 00:48:53.777118 kernel: Loading compiled-in X.509 certificates May 14 00:48:53.777125 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b' May 14 00:48:53.777135 kernel: Key type .fscrypt registered May 14 00:48:53.777142 kernel: Key type fscrypt-provisioning registered May 14 00:48:53.777149 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 00:48:53.777156 kernel: ima: Allocated hash algorithm: sha1 May 14 00:48:53.777162 kernel: ima: No architecture policies found May 14 00:48:53.777170 kernel: clk: Disabling unused clocks May 14 00:48:53.777177 kernel: Freeing unused kernel memory: 36480K May 14 00:48:53.777184 kernel: Run /init as init process May 14 00:48:53.777191 kernel: with arguments: May 14 00:48:53.777197 kernel: /init May 14 00:48:53.777203 kernel: with environment: May 14 00:48:53.777210 kernel: HOME=/ May 14 00:48:53.777216 kernel: TERM=linux May 14 00:48:53.777223 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:48:53.777233 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:48:53.777241 systemd[1]: Detected virtualization kvm. May 14 00:48:53.777249 systemd[1]: Detected architecture arm64. May 14 00:48:53.777256 systemd[1]: Running in initrd. May 14 00:48:53.777263 systemd[1]: No hostname configured, using default hostname. May 14 00:48:53.777270 systemd[1]: Hostname set to . May 14 00:48:53.777278 systemd[1]: Initializing machine ID from VM UUID. May 14 00:48:53.777286 systemd[1]: Queued start job for default target initrd.target. May 14 00:48:53.777294 systemd[1]: Started systemd-ask-password-console.path. May 14 00:48:53.777301 systemd[1]: Reached target cryptsetup.target. May 14 00:48:53.777308 systemd[1]: Reached target paths.target. May 14 00:48:53.777315 systemd[1]: Reached target slices.target. May 14 00:48:53.777322 systemd[1]: Reached target swap.target. May 14 00:48:53.777329 systemd[1]: Reached target timers.target. May 14 00:48:53.777336 systemd[1]: Listening on iscsid.socket. May 14 00:48:53.777344 systemd[1]: Listening on iscsiuio.socket. May 14 00:48:53.777351 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:48:53.777358 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:48:53.777365 systemd[1]: Listening on systemd-journald.socket. May 14 00:48:53.777373 systemd[1]: Listening on systemd-networkd.socket. May 14 00:48:53.777380 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:48:53.777387 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:48:53.777394 systemd[1]: Reached target sockets.target. May 14 00:48:53.777402 systemd[1]: Starting kmod-static-nodes.service... May 14 00:48:53.777409 systemd[1]: Finished network-cleanup.service. May 14 00:48:53.777416 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:48:53.777423 systemd[1]: Starting systemd-journald.service... May 14 00:48:53.777430 systemd[1]: Starting systemd-modules-load.service... May 14 00:48:53.777437 systemd[1]: Starting systemd-resolved.service... May 14 00:48:53.777444 systemd[1]: Starting systemd-vconsole-setup.service... 
May 14 00:48:53.777451 systemd[1]: Finished kmod-static-nodes.service. May 14 00:48:53.777458 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:48:53.777466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:48:53.777473 systemd[1]: Finished systemd-vconsole-setup.service. May 14 00:48:53.777481 kernel: audit: type=1130 audit(1747183733.773:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.777488 systemd[1]: Starting dracut-cmdline-ask.service... May 14 00:48:53.777498 systemd-journald[290]: Journal started May 14 00:48:53.777537 systemd-journald[290]: Runtime Journal (/run/log/journal/798ebf09f10a49b6bb4efcd984bb1e88) is 6.0M, max 48.7M, 42.6M free. May 14 00:48:53.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.774840 systemd-modules-load[291]: Inserted module 'overlay' May 14 00:48:53.779046 systemd[1]: Started systemd-journald.service. May 14 00:48:53.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.784856 kernel: audit: type=1130 audit(1747183733.778:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.785102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:48:53.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.788920 kernel: audit: type=1130 audit(1747183733.784:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.798218 systemd[1]: Finished dracut-cmdline-ask.service. May 14 00:48:53.811724 kernel: audit: type=1130 audit(1747183733.797:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.811762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 00:48:53.811784 kernel: Bridge firewalling registered May 14 00:48:53.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.799721 systemd[1]: Starting dracut-cmdline.service... May 14 00:48:53.813444 dracut-cmdline[308]: dracut-dracut-053 May 14 00:48:53.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.815948 kernel: audit: type=1130 audit(1747183733.812:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 14 00:48:53.806274 systemd-resolved[292]: Positive Trust Anchors: May 14 00:48:53.816744 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:48:53.806282 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:48:53.806308 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:48:53.806424 systemd-modules-load[291]: Inserted module 'br_netfilter' May 14 00:48:53.826499 kernel: SCSI subsystem initialized May 14 00:48:53.810831 systemd-resolved[292]: Defaulting to hostname 'linux'. May 14 00:48:53.812774 systemd[1]: Started systemd-resolved.service. May 14 00:48:53.814325 systemd[1]: Reached target nss-lookup.target. May 14 00:48:53.836466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 00:48:53.836534 kernel: device-mapper: uevent: version 1.0.3 May 14 00:48:53.836546 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 14 00:48:53.838718 systemd-modules-load[291]: Inserted module 'dm_multipath' May 14 00:48:53.839539 systemd[1]: Finished systemd-modules-load.service. May 14 00:48:53.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.842653 systemd[1]: Starting systemd-sysctl.service... May 14 00:48:53.843753 kernel: audit: type=1130 audit(1747183733.840:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.850003 systemd[1]: Finished systemd-sysctl.service. May 14 00:48:53.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.852911 kernel: audit: type=1130 audit(1747183733.849:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.878916 kernel: Loading iSCSI transport class v2.0-870. May 14 00:48:53.890924 kernel: iscsi: registered transport (tcp) May 14 00:48:53.905928 kernel: iscsi: registered transport (qla4xxx) May 14 00:48:53.905975 kernel: QLogic iSCSI HBA Driver May 14 00:48:53.942889 systemd[1]: Finished dracut-cmdline.service. 
May 14 00:48:53.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.944402 systemd[1]: Starting dracut-pre-udev.service... May 14 00:48:53.946732 kernel: audit: type=1130 audit(1747183733.942:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:53.989928 kernel: raid6: neonx8 gen() 13582 MB/s May 14 00:48:54.006917 kernel: raid6: neonx8 xor() 10641 MB/s May 14 00:48:54.023911 kernel: raid6: neonx4 gen() 13283 MB/s May 14 00:48:54.040911 kernel: raid6: neonx4 xor() 10976 MB/s May 14 00:48:54.057911 kernel: raid6: neonx2 gen() 12748 MB/s May 14 00:48:54.074919 kernel: raid6: neonx2 xor() 10047 MB/s May 14 00:48:54.091921 kernel: raid6: neonx1 gen() 10280 MB/s May 14 00:48:54.108920 kernel: raid6: neonx1 xor() 8392 MB/s May 14 00:48:54.125917 kernel: raid6: int64x8 gen() 6139 MB/s May 14 00:48:54.142915 kernel: raid6: int64x8 xor() 3448 MB/s May 14 00:48:54.159917 kernel: raid6: int64x4 gen() 7135 MB/s May 14 00:48:54.176917 kernel: raid6: int64x4 xor() 3734 MB/s May 14 00:48:54.193918 kernel: raid6: int64x2 gen() 6098 MB/s May 14 00:48:54.210914 kernel: raid6: int64x2 xor() 3261 MB/s May 14 00:48:54.227913 kernel: raid6: int64x1 gen() 4993 MB/s May 14 00:48:54.245253 kernel: raid6: int64x1 xor() 2613 MB/s May 14 00:48:54.245266 kernel: raid6: using algorithm neonx8 gen() 13582 MB/s May 14 00:48:54.245276 kernel: raid6: .... xor() 10641 MB/s, rmw enabled May 14 00:48:54.245284 kernel: raid6: using neon recovery algorithm May 14 00:48:54.255931 kernel: xor: measuring software checksum speed May 14 00:48:54.255957 kernel: 8regs : 17191 MB/sec May 14 00:48:54.256914 kernel: 32regs : 20707 MB/sec May 14 00:48:54.256930 kernel: arm64_neon : 26813 MB/sec May 14 00:48:54.256938 kernel: xor: using function: arm64_neon (26813 MB/sec) May 14 00:48:54.312917 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 14 00:48:54.323448 systemd[1]: Finished dracut-pre-udev.service. May 14 00:48:54.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:54.325000 audit: BPF prog-id=7 op=LOAD May 14 00:48:54.325000 audit: BPF prog-id=8 op=LOAD May 14 00:48:54.326732 systemd[1]: Starting systemd-udevd.service... May 14 00:48:54.327776 kernel: audit: type=1130 audit(1747183734.323:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:54.339081 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 14 00:48:54.342448 systemd[1]: Started systemd-udevd.service. May 14 00:48:54.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:54.344396 systemd[1]: Starting dracut-pre-trigger.service... May 14 00:48:54.356235 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation May 14 00:48:54.382473 systemd[1]: Finished dracut-pre-trigger.service. 
May 14 00:48:54.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:54.383843 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:48:54.417033 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:48:54.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:54.448775 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 00:48:54.452743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:48:54.452758 kernel: GPT:9289727 != 19775487 May 14 00:48:54.452766 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:48:54.452775 kernel: GPT:9289727 != 19775487 May 14 00:48:54.452782 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:48:54.452797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:48:54.462216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 14 00:48:54.464919 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (538) May 14 00:48:54.470251 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 14 00:48:54.476126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:48:54.478675 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 14 00:48:54.479641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 14 00:48:54.481655 systemd[1]: Starting disk-uuid.service... May 14 00:48:54.487301 disk-uuid[562]: Primary Header is updated. May 14 00:48:54.487301 disk-uuid[562]: Secondary Entries is updated. May 14 00:48:54.487301 disk-uuid[562]: Secondary Header is updated. May 14 00:48:54.489913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:48:54.500929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:48:55.500920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:48:55.501022 disk-uuid[563]: The operation has completed successfully. May 14 00:48:55.519577 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:48:55.519670 systemd[1]: Finished disk-uuid.service. May 14 00:48:55.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.525552 systemd[1]: Starting verity-setup.service... May 14 00:48:55.540969 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:48:55.560609 systemd[1]: Found device dev-mapper-usr.device. May 14 00:48:55.563202 systemd[1]: Mounting sysusr-usr.mount... May 14 00:48:55.564932 systemd[1]: Finished verity-setup.service. May 14 00:48:55.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.608916 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
May 14 00:48:55.609428 systemd[1]: Mounted sysusr-usr.mount. May 14 00:48:55.610563 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 14 00:48:55.612187 systemd[1]: Starting ignition-setup.service... May 14 00:48:55.613992 systemd[1]: Starting parse-ip-for-networkd.service... May 14 00:48:55.620151 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:48:55.620187 kernel: BTRFS info (device vda6): using free space tree May 14 00:48:55.620196 kernel: BTRFS info (device vda6): has skinny extents May 14 00:48:55.627823 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 00:48:55.632723 systemd[1]: Finished ignition-setup.service. May 14 00:48:55.634578 systemd[1]: Starting ignition-fetch-offline.service... May 14 00:48:55.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.701131 systemd[1]: Finished parse-ip-for-networkd.service. May 14 00:48:55.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.701000 audit: BPF prog-id=9 op=LOAD May 14 00:48:55.703009 systemd[1]: Starting systemd-networkd.service... May 14 00:48:55.705582 ignition[648]: Ignition 2.14.0 May 14 00:48:55.705590 ignition[648]: Stage: fetch-offline May 14 00:48:55.705628 ignition[648]: no configs at "/usr/lib/ignition/base.d" May 14 00:48:55.705637 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:48:55.705771 ignition[648]: parsed url from cmdline: "" May 14 00:48:55.705774 ignition[648]: no config URL provided May 14 00:48:55.705779 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:48:55.705786 ignition[648]: no config at "/usr/lib/ignition/user.ign" May 14 00:48:55.705803 ignition[648]: op(1): [started] loading QEMU firmware config module May 14 00:48:55.705808 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 00:48:55.711872 ignition[648]: op(1): [finished] loading QEMU firmware config module May 14 00:48:55.711904 ignition[648]: QEMU firmware config was not found. Ignoring... May 14 00:48:55.718978 ignition[648]: parsing config with SHA512: 3ec3673fa0d7fba50c4a3b5125497bb1ee61d05563a3b3e5dea3822b0bbdb6a0e0b22d01a8040901403e477898ae82c43ce603e6e992c9874bad50d34d396f07 May 14 00:48:55.724715 unknown[648]: fetched base config from "system" May 14 00:48:55.724727 unknown[648]: fetched user config from "qemu" May 14 00:48:55.725057 ignition[648]: fetch-offline: fetch-offline passed May 14 00:48:55.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.726233 systemd[1]: Finished ignition-fetch-offline.service. May 14 00:48:55.725107 ignition[648]: Ignition finished successfully May 14 00:48:55.730651 systemd-networkd[740]: lo: Link UP May 14 00:48:55.730666 systemd-networkd[740]: lo: Gained carrier May 14 00:48:55.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:48:55.731031 systemd-networkd[740]: Enumeration completed May 14 00:48:55.731100 systemd[1]: Started systemd-networkd.service. May 14 00:48:55.731198 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:48:55.732158 systemd[1]: Reached target network.target. May 14 00:48:55.732583 systemd-networkd[740]: eth0: Link UP May 14 00:48:55.732586 systemd-networkd[740]: eth0: Gained carrier May 14 00:48:55.733234 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:48:55.733955 systemd[1]: Starting ignition-kargs.service... May 14 00:48:55.735797 systemd[1]: Starting iscsiuio.service... May 14 00:48:55.743380 ignition[744]: Ignition 2.14.0 May 14 00:48:55.743391 ignition[744]: Stage: kargs May 14 00:48:55.743484 ignition[744]: no configs at "/usr/lib/ignition/base.d" May 14 00:48:55.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.744428 systemd[1]: Started iscsiuio.service. May 14 00:48:55.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.743493 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:48:55.745509 systemd[1]: Finished ignition-kargs.service. May 14 00:48:55.744226 ignition[744]: kargs: kargs passed May 14 00:48:55.745974 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:48:55.744269 ignition[744]: Ignition finished successfully May 14 00:48:55.747776 systemd[1]: Starting ignition-disks.service... May 14 00:48:55.754323 iscsid[754]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 14 00:48:55.754323 iscsid[754]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 14 00:48:55.754323 iscsid[754]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 14 00:48:55.754323 iscsid[754]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 14 00:48:55.754323 iscsid[754]: If using hardware iscsi like qla4xxx this message can be ignored. May 14 00:48:55.754323 iscsid[754]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 14 00:48:55.754323 iscsid[754]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 14 00:48:55.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.749941 systemd[1]: Starting iscsid.service... May 14 00:48:55.754124 ignition[753]: Ignition 2.14.0 May 14 00:48:55.756155 systemd[1]: Started iscsid.service. 
May 14 00:48:55.754130 ignition[753]: Stage: disks May 14 00:48:55.759294 systemd[1]: Finished ignition-disks.service. May 14 00:48:55.754213 ignition[753]: no configs at "/usr/lib/ignition/base.d" May 14 00:48:55.761695 systemd[1]: Reached target initrd-root-device.target. May 14 00:48:55.754222 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:48:55.762971 systemd[1]: Reached target local-fs-pre.target. May 14 00:48:55.754984 ignition[753]: disks: disks passed May 14 00:48:55.764384 systemd[1]: Reached target local-fs.target. May 14 00:48:55.755022 ignition[753]: Ignition finished successfully May 14 00:48:55.765970 systemd[1]: Reached target sysinit.target. May 14 00:48:55.767496 systemd[1]: Reached target basic.target. May 14 00:48:55.769330 systemd[1]: Starting dracut-initqueue.service... May 14 00:48:55.778895 systemd[1]: Finished dracut-initqueue.service. May 14 00:48:55.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.779563 systemd[1]: Reached target remote-fs-pre.target. May 14 00:48:55.780721 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:48:55.781921 systemd[1]: Reached target remote-fs.target. May 14 00:48:55.783751 systemd[1]: Starting dracut-pre-mount.service... May 14 00:48:55.791016 systemd[1]: Finished dracut-pre-mount.service. May 14 00:48:55.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.792308 systemd[1]: Starting systemd-fsck-root.service... May 14 00:48:55.802516 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 14 00:48:55.806038 systemd[1]: Finished systemd-fsck-root.service. May 14 00:48:55.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.807664 systemd[1]: Mounting sysroot.mount... May 14 00:48:55.813651 systemd[1]: Mounted sysroot.mount. May 14 00:48:55.814987 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 14 00:48:55.814462 systemd[1]: Reached target initrd-root-fs.target. May 14 00:48:55.816549 systemd[1]: Mounting sysroot-usr.mount... May 14 00:48:55.817270 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 14 00:48:55.817305 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:48:55.817327 systemd[1]: Reached target ignition-diskful.target. May 14 00:48:55.818920 systemd[1]: Mounted sysroot-usr.mount. May 14 00:48:55.820554 systemd[1]: Starting initrd-setup-root.service... May 14 00:48:55.824457 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:48:55.828073 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory May 14 00:48:55.831939 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:48:55.835822 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:48:55.859196 systemd[1]: Finished initrd-setup-root.service. 
May 14 00:48:55.860487 systemd[1]: Starting ignition-mount.service... May 14 00:48:55.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.861610 systemd[1]: Starting sysroot-boot.service... May 14 00:48:55.865808 bash[827]: umount: /sysroot/usr/share/oem: not mounted. May 14 00:48:55.873929 ignition[829]: INFO : Ignition 2.14.0 May 14 00:48:55.873929 ignition[829]: INFO : Stage: mount May 14 00:48:55.873929 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:48:55.873929 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:48:55.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:55.875489 systemd[1]: Finished ignition-mount.service. May 14 00:48:55.880985 ignition[829]: INFO : mount: mount passed May 14 00:48:55.880985 ignition[829]: INFO : Ignition finished successfully May 14 00:48:55.877336 systemd[1]: Finished sysroot-boot.service. May 14 00:48:56.571806 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 14 00:48:56.578347 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) May 14 00:48:56.578380 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:48:56.578391 kernel: BTRFS info (device vda6): using free space tree May 14 00:48:56.579228 kernel: BTRFS info (device vda6): has skinny extents May 14 00:48:56.581863 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 14 00:48:56.583397 systemd[1]: Starting ignition-files.service... 
May 14 00:48:56.596892 ignition[857]: INFO : Ignition 2.14.0 May 14 00:48:56.596892 ignition[857]: INFO : Stage: files May 14 00:48:56.598171 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:48:56.598171 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:48:56.598171 ignition[857]: DEBUG : files: compiled without relabeling support, skipping May 14 00:48:56.602162 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:48:56.602162 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:48:56.607524 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:48:56.608670 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:48:56.609961 unknown[857]: wrote ssh authorized keys file for user: core May 14 00:48:56.610934 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:48:56.610934 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 14 00:48:56.610934 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 14 00:48:56.610934 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 00:48:56.610934 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:48:56.610934 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:48:56.619973 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:48:56.619973 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:48:56.619973 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:48:56.619973 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:48:56.619973 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 00:48:56.947352 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK May 14 00:48:57.445935 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:48:57.445935 ignition[857]: INFO : files: op(8): [started] processing unit "containerd.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 14 00:48:57.449737 ignition[857]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 14 00:48:57.449737 ignition[857]: INFO : files: op(8): [finished] processing unit "containerd.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:48:57.449737 ignition[857]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:48:57.488302 ignition[857]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:48:57.489828 ignition[857]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:48:57.489828 ignition[857]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:48:57.489828 ignition[857]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:48:57.489828 ignition[857]: INFO : files: files passed May 14 00:48:57.489828 ignition[857]: INFO : Ignition finished successfully May 14 00:48:57.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.489764 systemd[1]: Finished ignition-files.service. May 14 00:48:57.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.491479 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 14 00:48:57.492693 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:48:57.503441 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:48:57.493324 systemd[1]: Starting ignition-quench.service... May 14 00:48:57.505638 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:48:57.496302 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:48:57.496391 systemd[1]: Finished ignition-quench.service. May 14 00:48:57.498101 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
May 14 00:48:57.499336 systemd[1]: Reached target ignition-complete.target. May 14 00:48:57.501070 systemd[1]: Starting initrd-parse-etc.service... May 14 00:48:57.512320 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:48:57.512399 systemd[1]: Finished initrd-parse-etc.service. May 14 00:48:57.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.513825 systemd[1]: Reached target initrd-fs.target. May 14 00:48:57.514873 systemd[1]: Reached target initrd.target. May 14 00:48:57.516000 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:48:57.516630 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:48:57.520529 systemd-networkd[740]: eth0: Gained IPv6LL May 14 00:48:57.527045 systemd[1]: Finished dracut-pre-pivot.service. May 14 00:48:57.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.528295 systemd[1]: Starting initrd-cleanup.service... May 14 00:48:57.535570 systemd[1]: Stopped target nss-lookup.target. May 14 00:48:57.536272 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:48:57.537513 systemd[1]: Stopped target timers.target. May 14 00:48:57.538655 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:48:57.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.538757 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:48:57.539832 systemd[1]: Stopped target initrd.target. May 14 00:48:57.541262 systemd[1]: Stopped target basic.target. May 14 00:48:57.542514 systemd[1]: Stopped target ignition-complete.target. May 14 00:48:57.543614 systemd[1]: Stopped target ignition-diskful.target. May 14 00:48:57.544727 systemd[1]: Stopped target initrd-root-device.target. May 14 00:48:57.546007 systemd[1]: Stopped target remote-fs.target. May 14 00:48:57.547188 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:48:57.548410 systemd[1]: Stopped target sysinit.target. May 14 00:48:57.549613 systemd[1]: Stopped target local-fs.target. May 14 00:48:57.550739 systemd[1]: Stopped target local-fs-pre.target. May 14 00:48:57.551833 systemd[1]: Stopped target swap.target. May 14 00:48:57.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.552870 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:48:57.552982 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:48:57.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.554178 systemd[1]: Stopped target cryptsetup.target. 
May 14 00:48:57.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.555172 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:48:57.555269 systemd[1]: Stopped dracut-initqueue.service. May 14 00:48:57.556589 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:48:57.556678 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:48:57.557806 systemd[1]: Stopped target paths.target. May 14 00:48:57.558811 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:48:57.561939 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:48:57.562743 systemd[1]: Stopped target slices.target. May 14 00:48:57.563782 systemd[1]: Stopped target sockets.target. May 14 00:48:57.565175 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:48:57.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.565240 systemd[1]: Closed iscsid.socket. May 14 00:48:57.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.566354 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:48:57.566417 systemd[1]: Closed iscsiuio.socket. May 14 00:48:57.567398 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:48:57.567488 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:48:57.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.568698 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:48:57.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.568793 systemd[1]: Stopped ignition-files.service. May 14 00:48:57.570552 systemd[1]: Stopping ignition-mount.service... May 14 00:48:57.579063 ignition[898]: INFO : Ignition 2.14.0 May 14 00:48:57.579063 ignition[898]: INFO : Stage: umount May 14 00:48:57.579063 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:48:57.579063 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:48:57.579063 ignition[898]: INFO : umount: umount passed May 14 00:48:57.579063 ignition[898]: INFO : Ignition finished successfully May 14 00:48:57.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 14 00:48:57.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.572289 systemd[1]: Stopping sysroot-boot.service... May 14 00:48:57.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.572798 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:48:57.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.572918 systemd[1]: Stopped systemd-udev-trigger.service. May 14 00:48:57.574564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:48:57.574647 systemd[1]: Stopped dracut-pre-trigger.service. May 14 00:48:57.578762 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:48:57.578839 systemd[1]: Finished initrd-cleanup.service. May 14 00:48:57.580052 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:48:57.580129 systemd[1]: Stopped ignition-mount.service. May 14 00:48:57.581836 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:48:57.582770 systemd[1]: Stopped target network.target. May 14 00:48:57.584497 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:48:57.584549 systemd[1]: Stopped ignition-disks.service. May 14 00:48:57.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.585526 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:48:57.585562 systemd[1]: Stopped ignition-kargs.service. May 14 00:48:57.586670 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:48:57.586706 systemd[1]: Stopped ignition-setup.service. May 14 00:48:57.587867 systemd[1]: Stopping systemd-networkd.service... May 14 00:48:57.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.589764 systemd[1]: Stopping systemd-resolved.service... May 14 00:48:57.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.594965 systemd-networkd[740]: eth0: DHCPv6 lease lost May 14 00:48:57.609000 audit: BPF prog-id=9 op=UNLOAD May 14 00:48:57.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.597077 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:48:57.597164 systemd[1]: Stopped systemd-networkd.service. May 14 00:48:57.598298 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:48:57.598327 systemd[1]: Closed systemd-networkd.socket. 
May 14 00:48:57.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.599962 systemd[1]: Stopping network-cleanup.service... May 14 00:48:57.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.601971 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:48:57.602029 systemd[1]: Stopped parse-ip-for-networkd.service. May 14 00:48:57.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.617000 audit: BPF prog-id=6 op=UNLOAD May 14 00:48:57.606314 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:48:57.606357 systemd[1]: Stopped systemd-sysctl.service. May 14 00:48:57.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.608360 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:48:57.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.608403 systemd[1]: Stopped systemd-modules-load.service. May 14 00:48:57.609438 systemd[1]: Stopping systemd-udevd.service... May 14 00:48:57.612951 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:48:57.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.613387 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:48:57.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.613486 systemd[1]: Stopped systemd-resolved.service. May 14 00:48:57.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.615077 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:48:57.615150 systemd[1]: Stopped sysroot-boot.service. May 14 00:48:57.616576 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:48:57.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.616619 systemd[1]: Stopped initrd-setup-root.service. May 14 00:48:57.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.618681 systemd[1]: network-cleanup.service: Deactivated successfully. 
May 14 00:48:57.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.618773 systemd[1]: Stopped network-cleanup.service. May 14 00:48:57.620178 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:48:57.620275 systemd[1]: Stopped systemd-udevd.service. May 14 00:48:57.621246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:48:57.621277 systemd[1]: Closed systemd-udevd-control.socket. May 14 00:48:57.622359 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:48:57.622391 systemd[1]: Closed systemd-udevd-kernel.socket. May 14 00:48:57.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:57.623641 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:48:57.623684 systemd[1]: Stopped dracut-pre-udev.service. May 14 00:48:57.624800 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:48:57.624838 systemd[1]: Stopped dracut-cmdline.service. May 14 00:48:57.626331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:48:57.626367 systemd[1]: Stopped dracut-cmdline-ask.service. May 14 00:48:57.628264 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 14 00:48:57.629396 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 00:48:57.644000 audit: BPF prog-id=5 op=UNLOAD May 14 00:48:57.644000 audit: BPF prog-id=4 op=UNLOAD May 14 00:48:57.644000 audit: BPF prog-id=3 op=UNLOAD May 14 00:48:57.629452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 14 00:48:57.645000 audit: BPF prog-id=8 op=UNLOAD May 14 00:48:57.645000 audit: BPF prog-id=7 op=UNLOAD May 14 00:48:57.631246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:48:57.631285 systemd[1]: Stopped kmod-static-nodes.service. May 14 00:48:57.632085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:48:57.632123 systemd[1]: Stopped systemd-vconsole-setup.service. May 14 00:48:57.634022 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 00:48:57.634386 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:48:57.634459 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 14 00:48:57.635513 systemd[1]: Reached target initrd-switch-root.target. May 14 00:48:57.637354 systemd[1]: Starting initrd-switch-root.service... May 14 00:48:57.642512 systemd[1]: Switching root. May 14 00:48:57.661941 iscsid[754]: iscsid shutting down. May 14 00:48:57.662618 systemd-journald[290]: Journal stopped May 14 00:48:59.705673 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 14 00:48:59.705727 kernel: SELinux: Class mctp_socket not defined in policy. May 14 00:48:59.705748 kernel: SELinux: Class anon_inode not defined in policy. 
May 14 00:48:59.705759 kernel: SELinux: the above unknown classes and permissions will be allowed May 14 00:48:59.705769 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:48:59.705779 kernel: SELinux: policy capability open_perms=1 May 14 00:48:59.705788 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:48:59.705801 kernel: SELinux: policy capability always_check_network=0 May 14 00:48:59.705816 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:48:59.705826 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:48:59.705835 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:48:59.705845 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:48:59.705855 systemd[1]: Successfully loaded SELinux policy in 31.563ms. May 14 00:48:59.705874 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.747ms. May 14 00:48:59.705886 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:48:59.705914 systemd[1]: Detected virtualization kvm. May 14 00:48:59.705927 systemd[1]: Detected architecture arm64. May 14 00:48:59.705937 systemd[1]: Detected first boot. May 14 00:48:59.705947 systemd[1]: Initializing machine ID from VM UUID. May 14 00:48:59.705958 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 14 00:48:59.705969 systemd[1]: Populated /etc with preset unit settings. May 14 00:48:59.705983 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:48:59.705994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:48:59.706005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:48:59.706017 systemd[1]: Queued start job for default target multi-user.target. May 14 00:48:59.706027 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 14 00:48:59.706038 systemd[1]: Created slice system-addon\x2dconfig.slice. May 14 00:48:59.706052 systemd[1]: Created slice system-addon\x2drun.slice. May 14 00:48:59.706063 systemd[1]: Created slice system-getty.slice. May 14 00:48:59.706073 systemd[1]: Created slice system-modprobe.slice. May 14 00:48:59.706083 systemd[1]: Created slice system-serial\x2dgetty.slice. May 14 00:48:59.706094 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 14 00:48:59.706104 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 14 00:48:59.706115 systemd[1]: Created slice user.slice. May 14 00:48:59.706126 systemd[1]: Started systemd-ask-password-console.path. May 14 00:48:59.706136 systemd[1]: Started systemd-ask-password-wall.path. May 14 00:48:59.706146 systemd[1]: Set up automount boot.automount. May 14 00:48:59.706157 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 14 00:48:59.706168 systemd[1]: Reached target integritysetup.target. 
May 14 00:48:59.706179 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:48:59.706190 systemd[1]: Reached target remote-fs.target. May 14 00:48:59.706201 systemd[1]: Reached target slices.target. May 14 00:48:59.706212 systemd[1]: Reached target swap.target. May 14 00:48:59.706222 systemd[1]: Reached target torcx.target. May 14 00:48:59.706232 systemd[1]: Reached target veritysetup.target. May 14 00:48:59.706242 systemd[1]: Listening on systemd-coredump.socket. May 14 00:48:59.706254 systemd[1]: Listening on systemd-initctl.socket. May 14 00:48:59.706264 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:48:59.706278 kernel: kauditd_printk_skb: 79 callbacks suppressed May 14 00:48:59.706294 kernel: audit: type=1400 audit(1747183739.605:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:48:59.706305 kernel: audit: type=1335 audit(1747183739.606:84): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 14 00:48:59.706315 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:48:59.706325 systemd[1]: Listening on systemd-journald.socket. May 14 00:48:59.706335 systemd[1]: Listening on systemd-networkd.socket. May 14 00:48:59.706346 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:48:59.706357 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:48:59.706367 systemd[1]: Listening on systemd-userdbd.socket. May 14 00:48:59.706377 systemd[1]: Mounting dev-hugepages.mount... May 14 00:48:59.706387 systemd[1]: Mounting dev-mqueue.mount... May 14 00:48:59.706397 systemd[1]: Mounting media.mount... May 14 00:48:59.706408 systemd[1]: Mounting sys-kernel-debug.mount... May 14 00:48:59.706418 systemd[1]: Mounting sys-kernel-tracing.mount... May 14 00:48:59.706428 systemd[1]: Mounting tmp.mount... May 14 00:48:59.706438 systemd[1]: Starting flatcar-tmpfiles.service... May 14 00:48:59.706449 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:48:59.706460 systemd[1]: Starting kmod-static-nodes.service... May 14 00:48:59.706470 systemd[1]: Starting modprobe@configfs.service... May 14 00:48:59.706481 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:48:59.706491 systemd[1]: Starting modprobe@drm.service... May 14 00:48:59.706501 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:48:59.706513 systemd[1]: Starting modprobe@fuse.service... May 14 00:48:59.706523 systemd[1]: Starting modprobe@loop.service... May 14 00:48:59.706534 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:48:59.706545 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 14 00:48:59.706555 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 14 00:48:59.706566 systemd[1]: Starting systemd-journald.service... May 14 00:48:59.706575 kernel: loop: module loaded May 14 00:48:59.706585 systemd[1]: Starting systemd-modules-load.service... May 14 00:48:59.706595 kernel: fuse: init (API version 7.34) May 14 00:48:59.706604 systemd[1]: Starting systemd-network-generator.service... May 14 00:48:59.706615 systemd[1]: Starting systemd-remount-fs.service... 
May 14 00:48:59.706625 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:48:59.706637 systemd[1]: Mounted dev-hugepages.mount. May 14 00:48:59.706647 systemd[1]: Mounted dev-mqueue.mount. May 14 00:48:59.706657 systemd[1]: Mounted media.mount. May 14 00:48:59.706667 systemd[1]: Mounted sys-kernel-debug.mount. May 14 00:48:59.706677 systemd[1]: Mounted sys-kernel-tracing.mount. May 14 00:48:59.706688 systemd[1]: Mounted tmp.mount. May 14 00:48:59.706697 systemd[1]: Finished kmod-static-nodes.service. May 14 00:48:59.706708 kernel: audit: type=1130 audit(1747183739.695:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.706718 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:48:59.706729 systemd[1]: Finished modprobe@configfs.service. May 14 00:48:59.706746 kernel: audit: type=1130 audit(1747183739.699:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.706756 kernel: audit: type=1131 audit(1747183739.701:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.706765 kernel: audit: type=1305 audit(1747183739.701:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:48:59.706778 kernel: audit: type=1300 audit(1747183739.701:88): arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd45e4850 a2=4000 a3=1 items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:48:59.706791 systemd-journald[1031]: Journal started May 14 00:48:59.706834 systemd-journald[1031]: Runtime Journal (/run/log/journal/798ebf09f10a49b6bb4efcd984bb1e88) is 6.0M, max 48.7M, 42.6M free. May 14 00:48:59.605000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:48:59.606000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 14 00:48:59.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:48:59.701000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:48:59.701000 audit[1031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd45e4850 a2=4000 a3=1 items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:48:59.709217 systemd[1]: Started systemd-journald.service. May 14 00:48:59.709260 kernel: audit: type=1327 audit(1747183739.701:88): proctitle="/usr/lib/systemd/systemd-journald" May 14 00:48:59.701000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 14 00:48:59.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.711604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:48:59.711830 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:48:59.714340 kernel: audit: type=1130 audit(1747183739.709:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.714388 kernel: audit: type=1130 audit(1747183739.713:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.713812 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:48:59.714028 systemd[1]: Finished modprobe@drm.service. May 14 00:48:59.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.716557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:48:59.716766 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:48:59.717925 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 14 00:48:59.718126 systemd[1]: Finished modprobe@fuse.service. May 14 00:48:59.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.719171 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:48:59.719370 systemd[1]: Finished modprobe@loop.service. May 14 00:48:59.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.720568 systemd[1]: Finished systemd-modules-load.service. May 14 00:48:59.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.722659 systemd[1]: Finished systemd-network-generator.service. May 14 00:48:59.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.724572 systemd[1]: Finished systemd-remount-fs.service. May 14 00:48:59.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.725769 systemd[1]: Reached target network-pre.target. May 14 00:48:59.727794 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:48:59.729644 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:48:59.730524 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:48:59.732578 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:48:59.734558 systemd[1]: Starting systemd-journal-flush.service... May 14 00:48:59.735483 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:48:59.736598 systemd[1]: Starting systemd-random-seed.service... May 14 00:48:59.737530 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:48:59.738710 systemd[1]: Starting systemd-sysctl.service... May 14 00:48:59.739491 systemd-journald[1031]: Time spent on flushing to /var/log/journal/798ebf09f10a49b6bb4efcd984bb1e88 is 12.020ms for 914 entries. May 14 00:48:59.739491 systemd-journald[1031]: System Journal (/var/log/journal/798ebf09f10a49b6bb4efcd984bb1e88) is 8.0M, max 195.6M, 187.6M free. May 14 00:48:59.797370 systemd-journald[1031]: Received client request to flush runtime journal. 
May 14 00:48:59.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:48:59.742541 systemd[1]: Finished flatcar-tmpfiles.service. May 14 00:48:59.743550 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:48:59.744749 systemd[1]: Mounted sys-kernel-config.mount. May 14 00:48:59.798169 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 00:48:59.746606 systemd[1]: Starting systemd-sysusers.service... May 14 00:48:59.751432 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:48:59.753935 systemd[1]: Starting systemd-udev-settle.service... May 14 00:48:59.766997 systemd[1]: Finished systemd-sysctl.service. May 14 00:48:59.769159 systemd[1]: Finished systemd-sysusers.service. May 14 00:48:59.771011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:48:59.782728 systemd[1]: Finished systemd-random-seed.service. May 14 00:48:59.783515 systemd[1]: Reached target first-boot-complete.target. May 14 00:48:59.794173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:48:59.798517 systemd[1]: Finished systemd-journal-flush.service. May 14 00:48:59.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.108340 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:49:00.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.110281 systemd[1]: Starting systemd-udevd.service... May 14 00:49:00.133881 systemd-udevd[1093]: Using default interface naming scheme 'v252'. May 14 00:49:00.151851 systemd[1]: Started systemd-udevd.service. 
May 14 00:49:00.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.153996 systemd[1]: Starting systemd-networkd.service... May 14 00:49:00.165452 systemd[1]: Starting systemd-userdbd.service... May 14 00:49:00.170351 systemd[1]: Found device dev-ttyAMA0.device. May 14 00:49:00.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.207003 systemd[1]: Started systemd-userdbd.service. May 14 00:49:00.248785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:49:00.271481 systemd[1]: Finished systemd-udev-settle.service. May 14 00:49:00.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.273659 systemd[1]: Starting lvm2-activation-early.service... May 14 00:49:00.293484 systemd-networkd[1100]: lo: Link UP May 14 00:49:00.293777 systemd-networkd[1100]: lo: Gained carrier May 14 00:49:00.294235 systemd-networkd[1100]: Enumeration completed May 14 00:49:00.294450 systemd-networkd[1100]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:49:00.294466 systemd[1]: Started systemd-networkd.service. May 14 00:49:00.295162 lvm[1127]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:49:00.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.296334 systemd-networkd[1100]: eth0: Link UP May 14 00:49:00.296411 systemd-networkd[1100]: eth0: Gained carrier May 14 00:49:00.332051 systemd-networkd[1100]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:49:00.337819 systemd[1]: Finished lvm2-activation-early.service. May 14 00:49:00.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.338867 systemd[1]: Reached target cryptsetup.target. May 14 00:49:00.340942 systemd[1]: Starting lvm2-activation.service... May 14 00:49:00.344481 lvm[1129]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:49:00.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.376817 systemd[1]: Finished lvm2-activation.service. May 14 00:49:00.377593 systemd[1]: Reached target local-fs-pre.target. May 14 00:49:00.378249 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:49:00.378284 systemd[1]: Reached target local-fs.target. May 14 00:49:00.378840 systemd[1]: Reached target machines.target. May 14 00:49:00.380641 systemd[1]: Starting ldconfig.service... 
May 14 00:49:00.381866 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:49:00.381958 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:49:00.383081 systemd[1]: Starting systemd-boot-update.service... May 14 00:49:00.385186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:49:00.387141 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:49:00.389143 systemd[1]: Starting systemd-sysext.service... May 14 00:49:00.390199 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1132 (bootctl) May 14 00:49:00.391270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 14 00:49:00.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.393819 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 14 00:49:00.411033 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:49:00.416543 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:49:00.416827 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:49:00.453972 kernel: loop0: detected capacity change from 0 to 194096 May 14 00:49:00.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.456204 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:49:00.463639 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31) May 14 00:49:00.463639 systemd-fsck[1140]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:49:00.464936 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:49:00.466243 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 14 00:49:00.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.483925 kernel: loop1: detected capacity change from 0 to 194096 May 14 00:49:00.489141 (sd-sysext)[1150]: Using extensions 'kubernetes'. May 14 00:49:00.489456 (sd-sysext)[1150]: Merged extensions into '/usr'. May 14 00:49:00.504538 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:49:00.506080 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:49:00.508098 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:49:00.510112 systemd[1]: Starting modprobe@loop.service... May 14 00:49:00.510753 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:49:00.510868 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:49:00.511795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:49:00.512003 systemd[1]: Finished modprobe@dm_mod.service. 
May 14 00:49:00.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.513214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:49:00.513392 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:49:00.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.514570 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:49:00.514764 systemd[1]: Finished modprobe@loop.service. May 14 00:49:00.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.515956 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:49:00.516057 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:49:00.566832 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:49:00.570230 systemd[1]: Finished ldconfig.service. May 14 00:49:00.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.686814 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:49:00.688795 systemd[1]: Mounting boot.mount... May 14 00:49:00.690628 systemd[1]: Mounting usr-share-oem.mount... May 14 00:49:00.697171 systemd[1]: Mounted boot.mount. May 14 00:49:00.700165 systemd[1]: Mounted usr-share-oem.mount. May 14 00:49:00.701958 systemd[1]: Finished systemd-sysext.service. May 14 00:49:00.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.703948 systemd[1]: Starting ensure-sysext.service... May 14 00:49:00.705512 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:49:00.706631 systemd[1]: Finished systemd-boot-update.service. May 14 00:49:00.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:49:00.710793 systemd[1]: Reloading. May 14 00:49:00.714517 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 14 00:49:00.715283 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:49:00.716592 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:49:00.745133 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-05-14T00:49:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:49:00.745163 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-05-14T00:49:00Z" level=info msg="torcx already run" May 14 00:49:00.813943 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:49:00.813964 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:49:00.831483 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:49:00.872579 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:49:00.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.876489 systemd[1]: Starting audit-rules.service... May 14 00:49:00.878418 systemd[1]: Starting clean-ca-certificates.service... May 14 00:49:00.880409 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:49:00.883236 systemd[1]: Starting systemd-resolved.service... May 14 00:49:00.885485 systemd[1]: Starting systemd-timesyncd.service... May 14 00:49:00.887534 systemd[1]: Starting systemd-update-utmp.service... May 14 00:49:00.889057 systemd[1]: Finished clean-ca-certificates.service. May 14 00:49:00.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.892485 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:49:00.896433 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:49:00.897700 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:49:00.899836 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:49:00.902024 systemd[1]: Starting modprobe@loop.service... May 14 00:49:00.902690 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:49:00.902883 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 14 00:49:00.903084 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:49:00.904045 systemd[1]: Finished systemd-journal-catalog-update.service. May 14 00:49:00.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.906000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:49:00.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.905362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:49:00.905510 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:49:00.906635 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:49:00.906788 systemd[1]: Finished modprobe@loop.service. May 14 00:49:00.911702 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:49:00.913419 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:49:00.915290 systemd[1]: Starting modprobe@loop.service... May 14 00:49:00.916078 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:49:00.916268 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:49:00.917832 systemd[1]: Starting systemd-update-done.service... May 14 00:49:00.918586 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:49:00.919844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:49:00.920028 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:49:00.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:49:00.921290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:49:00.921443 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:49:00.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.922540 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:49:00.922703 systemd[1]: Finished modprobe@loop.service. May 14 00:49:00.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.923914 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:49:00.924065 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:49:00.925331 systemd[1]: Finished systemd-update-utmp.service. May 14 00:49:00.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.926541 systemd[1]: Finished systemd-update-done.service. May 14 00:49:00.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.929422 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:49:00.930986 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:49:00.933431 systemd[1]: Starting modprobe@drm.service... May 14 00:49:00.935258 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:49:00.937588 systemd[1]: Starting modprobe@loop.service... May 14 00:49:00.938536 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:49:00.938689 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:49:00.940288 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:49:00.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:49:00.941505 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:49:00.942664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:49:00.942861 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:49:00.944132 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:49:00.945014 systemd[1]: Finished modprobe@drm.service. May 14 00:49:00.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.946425 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:49:00.946575 systemd[1]: Finished modprobe@loop.service. May 14 00:49:00.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.950258 systemd[1]: Finished ensure-sysext.service. May 14 00:49:00.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.951075 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:49:00.952126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:49:00.952297 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:49:00.953036 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:49:00.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:49:00.976089 systemd-resolved[1238]: Positive Trust Anchors: May 14 00:49:00.976382 systemd-resolved[1238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:49:00.976461 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:49:00.976000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:49:00.976000 audit[1281]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff3c58400 a2=420 a3=0 items=0 ppid=1234 pid=1281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:49:00.976000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:49:00.977600 augenrules[1281]: No rules May 14 00:49:00.978425 systemd[1]: Finished audit-rules.service. May 14 00:49:00.990685 systemd[1]: Started systemd-timesyncd.service. May 14 00:49:00.991438 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:49:00.991490 systemd-timesyncd[1240]: Initial clock synchronization to Wed 2025-05-14 00:49:01.113172 UTC. May 14 00:49:00.991810 systemd[1]: Reached target time-set.target. May 14 00:49:00.993959 systemd-resolved[1238]: Defaulting to hostname 'linux'. May 14 00:49:01.000534 systemd[1]: Started systemd-resolved.service. May 14 00:49:01.001220 systemd[1]: Reached target network.target. May 14 00:49:01.001793 systemd[1]: Reached target nss-lookup.target. May 14 00:49:01.002406 systemd[1]: Reached target sysinit.target. May 14 00:49:01.003046 systemd[1]: Started motdgen.path. May 14 00:49:01.003582 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:49:01.004570 systemd[1]: Started logrotate.timer. May 14 00:49:01.005263 systemd[1]: Started mdadm.timer. May 14 00:49:01.005764 systemd[1]: Started systemd-tmpfiles-clean.timer. May 14 00:49:01.006412 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:49:01.006443 systemd[1]: Reached target paths.target. May 14 00:49:01.006987 systemd[1]: Reached target timers.target. May 14 00:49:01.007859 systemd[1]: Listening on dbus.socket. May 14 00:49:01.009772 systemd[1]: Starting docker.socket... May 14 00:49:01.011511 systemd[1]: Listening on sshd.socket. May 14 00:49:01.012267 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:49:01.012593 systemd[1]: Listening on docker.socket. May 14 00:49:01.013230 systemd[1]: Reached target sockets.target. May 14 00:49:01.013803 systemd[1]: Reached target basic.target. May 14 00:49:01.014585 systemd[1]: System is tainted: cgroupsv1 May 14 00:49:01.014637 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:49:01.014658 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. 
May 14 00:49:01.015704 systemd[1]: Starting containerd.service... May 14 00:49:01.017479 systemd[1]: Starting dbus.service... May 14 00:49:01.019248 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:49:01.021323 systemd[1]: Starting extend-filesystems.service... May 14 00:49:01.022038 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:49:01.023533 systemd[1]: Starting motdgen.service... May 14 00:49:01.025411 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:49:01.027305 systemd[1]: Starting sshd-keygen.service... May 14 00:49:01.030946 jq[1293]: false May 14 00:49:01.029906 systemd[1]: Starting systemd-logind.service... May 14 00:49:01.030625 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:49:01.030739 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:49:01.032078 systemd[1]: Starting update-engine.service... May 14 00:49:01.033772 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:49:01.036171 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:49:01.037003 jq[1306]: true May 14 00:49:01.036708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 14 00:49:01.049393 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:49:01.049654 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:49:01.056704 jq[1310]: true May 14 00:49:01.057894 extend-filesystems[1294]: Found loop1 May 14 00:49:01.057894 extend-filesystems[1294]: Found vda May 14 00:49:01.059648 extend-filesystems[1294]: Found vda1 May 14 00:49:01.059648 extend-filesystems[1294]: Found vda2 May 14 00:49:01.059648 extend-filesystems[1294]: Found vda3 May 14 00:49:01.059648 extend-filesystems[1294]: Found usr May 14 00:49:01.059648 extend-filesystems[1294]: Found vda4 May 14 00:49:01.059648 extend-filesystems[1294]: Found vda6 May 14 00:49:01.059648 extend-filesystems[1294]: Found vda7 May 14 00:49:01.059648 extend-filesystems[1294]: Found vda9 May 14 00:49:01.059648 extend-filesystems[1294]: Checking size of /dev/vda9 May 14 00:49:01.060425 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:49:01.060663 systemd[1]: Finished motdgen.service. May 14 00:49:01.091694 dbus-daemon[1292]: [system] SELinux support is enabled May 14 00:49:01.091878 systemd[1]: Started dbus.service. May 14 00:49:01.094607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:49:01.094639 systemd[1]: Reached target system-config.target. May 14 00:49:01.095541 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:49:01.100797 bash[1342]: Updated "/home/core/.ssh/authorized_keys" May 14 00:49:01.095567 systemd[1]: Reached target user-config.target. May 14 00:49:01.101524 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
May 14 00:49:01.105095 extend-filesystems[1294]: Resized partition /dev/vda9 May 14 00:49:01.112398 extend-filesystems[1348]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:49:01.116904 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:49:01.117275 systemd-logind[1301]: New seat seat0. May 14 00:49:01.117344 update_engine[1305]: I0514 00:49:01.116842 1305 main.cc:92] Flatcar Update Engine starting May 14 00:49:01.122777 systemd[1]: Started systemd-logind.service. May 14 00:49:01.123087 update_engine[1305]: I0514 00:49:01.123055 1305 update_check_scheduler.cc:74] Next update check in 7m5s May 14 00:49:01.123695 systemd[1]: Started update-engine.service. May 14 00:49:01.132406 systemd[1]: Started locksmithd.service. May 14 00:49:01.132921 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:49:01.163087 env[1312]: time="2025-05-14T00:49:01.163030533Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:49:01.167930 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:49:01.177876 extend-filesystems[1348]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:49:01.177876 extend-filesystems[1348]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:49:01.177876 extend-filesystems[1348]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:49:01.182284 extend-filesystems[1294]: Resized filesystem in /dev/vda9 May 14 00:49:01.178632 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:49:01.178888 systemd[1]: Finished extend-filesystems.service. May 14 00:49:01.187269 env[1312]: time="2025-05-14T00:49:01.186988938Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 00:49:01.187269 env[1312]: time="2025-05-14T00:49:01.187169771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:49:01.188448 env[1312]: time="2025-05-14T00:49:01.188375007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:49:01.188448 env[1312]: time="2025-05-14T00:49:01.188405666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:49:01.188689 env[1312]: time="2025-05-14T00:49:01.188662925Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:49:01.188689 env[1312]: time="2025-05-14T00:49:01.188686194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 00:49:01.188762 env[1312]: time="2025-05-14T00:49:01.188699473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:49:01.188762 env[1312]: time="2025-05-14T00:49:01.188709544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 14 00:49:01.188809 env[1312]: time="2025-05-14T00:49:01.188783696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:49:01.189106 env[1312]: time="2025-05-14T00:49:01.189079858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:49:01.189279 env[1312]: time="2025-05-14T00:49:01.189253665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:49:01.189279 env[1312]: time="2025-05-14T00:49:01.189274497Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 00:49:01.189343 env[1312]: time="2025-05-14T00:49:01.189331796Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:49:01.189372 env[1312]: time="2025-05-14T00:49:01.189343979Z" level=info msg="metadata content store policy set" policy=shared May 14 00:49:01.194609 env[1312]: time="2025-05-14T00:49:01.194562404Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:49:01.194609 env[1312]: time="2025-05-14T00:49:01.194591927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:49:01.194609 env[1312]: time="2025-05-14T00:49:01.194604151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:49:01.194738 env[1312]: time="2025-05-14T00:49:01.194634161Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:49:01.194738 env[1312]: time="2025-05-14T00:49:01.194655074Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 00:49:01.194738 env[1312]: time="2025-05-14T00:49:01.194670587Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:49:01.194738 env[1312]: time="2025-05-14T00:49:01.194683379Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:49:01.195078 env[1312]: time="2025-05-14T00:49:01.195052840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:49:01.195078 env[1312]: time="2025-05-14T00:49:01.195075540Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 14 00:49:01.195138 env[1312]: time="2025-05-14T00:49:01.195089835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:49:01.195138 env[1312]: time="2025-05-14T00:49:01.195102261Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:49:01.195138 env[1312]: time="2025-05-14T00:49:01.195114484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:49:01.195253 env[1312]: time="2025-05-14T00:49:01.195227012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 14 00:49:01.195360 env[1312]: time="2025-05-14T00:49:01.195306728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:49:01.195712 env[1312]: time="2025-05-14T00:49:01.195644189Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:49:01.195712 env[1312]: time="2025-05-14T00:49:01.195684189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195712 env[1312]: time="2025-05-14T00:49:01.195698321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:49:01.195821 env[1312]: time="2025-05-14T00:49:01.195805610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195853 env[1312]: time="2025-05-14T00:49:01.195821813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195853 env[1312]: time="2025-05-14T00:49:01.195835173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195853 env[1312]: time="2025-05-14T00:49:01.195846463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195942 env[1312]: time="2025-05-14T00:49:01.195923214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195971 env[1312]: time="2025-05-14T00:49:01.195946483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195997 env[1312]: time="2025-05-14T00:49:01.195970320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195997 env[1312]: time="2025-05-14T00:49:01.195982503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:49:01.195997 env[1312]: time="2025-05-14T00:49:01.195995173Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:49:01.196135 env[1312]: time="2025-05-14T00:49:01.196114036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:49:01.196169 env[1312]: time="2025-05-14T00:49:01.196133812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:49:01.196169 env[1312]: time="2025-05-14T00:49:01.196146929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:49:01.196169 env[1312]: time="2025-05-14T00:49:01.196159437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:49:01.196225 env[1312]: time="2025-05-14T00:49:01.196173284Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:49:01.196225 env[1312]: time="2025-05-14T00:49:01.196183883Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 14 00:49:01.196225 env[1312]: time="2025-05-14T00:49:01.196207558Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:49:01.196285 env[1312]: time="2025-05-14T00:49:01.196239761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 00:49:01.196483 env[1312]: time="2025-05-14T00:49:01.196432735Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:49:01.197052 env[1312]: time="2025-05-14T00:49:01.196488979Z" level=info msg="Connect containerd service" May 14 00:49:01.197052 env[1312]: time="2025-05-14T00:49:01.196516837Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:49:01.197257 env[1312]: time="2025-05-14T00:49:01.197211698Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:49:01.197565 env[1312]: time="2025-05-14T00:49:01.197486987Z" level=info msg="Start subscribing containerd event" May 14 00:49:01.197565 env[1312]: time="2025-05-14T00:49:01.197548550Z" level=info msg="Start recovering state" May 14 00:49:01.197629 env[1312]: time="2025-05-14T00:49:01.197606540Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 14 00:49:01.197652 env[1312]: time="2025-05-14T00:49:01.197632936Z" level=info msg="Start event monitor" May 14 00:49:01.197674 env[1312]: time="2025-05-14T00:49:01.197666804Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:49:01.197755 env[1312]: time="2025-05-14T00:49:01.197737464Z" level=info msg="containerd successfully booted in 0.035785s" May 14 00:49:01.197817 systemd[1]: Started containerd.service. May 14 00:49:01.200713 env[1312]: time="2025-05-14T00:49:01.198913542Z" level=info msg="Start snapshots syncer" May 14 00:49:01.200713 env[1312]: time="2025-05-14T00:49:01.198947248Z" level=info msg="Start cni network conf syncer for default" May 14 00:49:01.200713 env[1312]: time="2025-05-14T00:49:01.199047187Z" level=info msg="Start streaming server" May 14 00:49:01.203991 locksmithd[1349]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:49:01.807016 systemd-networkd[1100]: eth0: Gained IPv6LL May 14 00:49:01.809204 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:49:01.810209 systemd[1]: Reached target network-online.target. May 14 00:49:01.812448 systemd[1]: Starting kubelet.service... May 14 00:49:01.839267 sshd_keygen[1316]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:49:01.857339 systemd[1]: Finished sshd-keygen.service. May 14 00:49:01.859521 systemd[1]: Starting issuegen.service... May 14 00:49:01.864754 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:49:01.864971 systemd[1]: Finished issuegen.service. May 14 00:49:01.866963 systemd[1]: Starting systemd-user-sessions.service... May 14 00:49:01.873098 systemd[1]: Finished systemd-user-sessions.service. May 14 00:49:01.875132 systemd[1]: Started getty@tty1.service. May 14 00:49:01.877042 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:49:01.877890 systemd[1]: Reached target getty.target. May 14 00:49:02.347345 systemd[1]: Started kubelet.service. May 14 00:49:02.348549 systemd[1]: Reached target multi-user.target. May 14 00:49:02.350568 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:49:02.356586 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 14 00:49:02.356818 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:49:02.357783 systemd[1]: Startup finished in 4.708s (kernel) + 4.642s (userspace) = 9.350s. May 14 00:49:02.853015 kubelet[1387]: E0514 00:49:02.852965 1387 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:49:02.854575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:49:02.854719 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:49:05.876002 systemd[1]: Created slice system-sshd.slice. May 14 00:49:05.877165 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:48922.service. May 14 00:49:05.931789 sshd[1398]: Accepted publickey for core from 10.0.0.1 port 48922 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:49:05.934417 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:49:05.943761 systemd[1]: Created slice user-500.slice. 
May 14 00:49:05.944709 systemd[1]: Starting user-runtime-dir@500.service... May 14 00:49:05.948836 systemd-logind[1301]: New session 1 of user core. May 14 00:49:05.956093 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:49:05.957206 systemd[1]: Starting user@500.service... May 14 00:49:05.962065 (systemd)[1402]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:49:06.033762 systemd[1402]: Queued start job for default target default.target. May 14 00:49:06.034282 systemd[1402]: Reached target paths.target. May 14 00:49:06.034405 systemd[1402]: Reached target sockets.target. May 14 00:49:06.034478 systemd[1402]: Reached target timers.target. May 14 00:49:06.034547 systemd[1402]: Reached target basic.target. May 14 00:49:06.034652 systemd[1402]: Reached target default.target. May 14 00:49:06.034750 systemd[1402]: Startup finished in 66ms. May 14 00:49:06.034763 systemd[1]: Started user@500.service. May 14 00:49:06.035727 systemd[1]: Started session-1.scope. May 14 00:49:06.087009 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:48928.service. May 14 00:49:06.125179 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 48928 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:49:06.126389 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:49:06.131013 systemd-logind[1301]: New session 2 of user core. May 14 00:49:06.131812 systemd[1]: Started session-2.scope. May 14 00:49:06.188070 sshd[1412]: pam_unix(sshd:session): session closed for user core May 14 00:49:06.190231 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:48932.service. May 14 00:49:06.194588 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:48928.service: Deactivated successfully. May 14 00:49:06.195253 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:49:06.197241 systemd-logind[1301]: Session 2 logged out. Waiting for processes to exit. May 14 00:49:06.198777 systemd-logind[1301]: Removed session 2. May 14 00:49:06.228597 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:49:06.230282 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:49:06.235113 systemd[1]: Started session-3.scope. May 14 00:49:06.236071 systemd-logind[1301]: New session 3 of user core. May 14 00:49:06.288355 sshd[1417]: pam_unix(sshd:session): session closed for user core May 14 00:49:06.290667 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:48942.service. May 14 00:49:06.295999 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:48932.service: Deactivated successfully. May 14 00:49:06.296660 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:49:06.297946 systemd-logind[1301]: Session 3 logged out. Waiting for processes to exit. May 14 00:49:06.299556 systemd-logind[1301]: Removed session 3. May 14 00:49:06.326923 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 48942 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:49:06.328077 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:49:06.332879 systemd[1]: Started session-4.scope. May 14 00:49:06.333835 systemd-logind[1301]: New session 4 of user core. May 14 00:49:06.396094 sshd[1424]: pam_unix(sshd:session): session closed for user core May 14 00:49:06.400027 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:48954.service. 
May 14 00:49:06.400815 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:48942.service: Deactivated successfully. May 14 00:49:06.404929 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:49:06.411990 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. May 14 00:49:06.412817 systemd-logind[1301]: Removed session 4. May 14 00:49:06.438086 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 48954 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:49:06.439592 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:49:06.443730 systemd[1]: Started session-5.scope. May 14 00:49:06.444006 systemd-logind[1301]: New session 5 of user core. May 14 00:49:06.508633 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:49:06.508863 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:49:06.520456 systemd[1]: Starting coreos-metadata.service... May 14 00:49:06.530092 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:49:06.530297 systemd[1]: Finished coreos-metadata.service. May 14 00:49:07.053421 systemd[1]: Stopped kubelet.service. May 14 00:49:07.055436 systemd[1]: Starting kubelet.service... May 14 00:49:07.072930 systemd[1]: Reloading. May 14 00:49:07.123225 /usr/lib/systemd/system-generators/torcx-generator[1508]: time="2025-05-14T00:49:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:49:07.123258 /usr/lib/systemd/system-generators/torcx-generator[1508]: time="2025-05-14T00:49:07Z" level=info msg="torcx already run" May 14 00:49:07.189011 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:49:07.189155 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:49:07.207226 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:49:07.273513 systemd[1]: Started kubelet.service. May 14 00:49:07.276948 systemd[1]: Stopping kubelet.service... May 14 00:49:07.277585 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:49:07.278001 systemd[1]: Stopped kubelet.service. May 14 00:49:07.279816 systemd[1]: Starting kubelet.service... May 14 00:49:07.361215 systemd[1]: Started kubelet.service. May 14 00:49:07.401529 kubelet[1569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:49:07.401529 kubelet[1569]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:49:07.401529 kubelet[1569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:49:07.401891 kubelet[1569]: I0514 00:49:07.401616 1569 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:49:08.117716 kubelet[1569]: I0514 00:49:08.117665 1569 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:49:08.117716 kubelet[1569]: I0514 00:49:08.117701 1569 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:49:08.117920 kubelet[1569]: I0514 00:49:08.117894 1569 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:49:08.158589 kubelet[1569]: I0514 00:49:08.158560 1569 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:49:08.167701 kubelet[1569]: I0514 00:49:08.167667 1569 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:49:08.169421 kubelet[1569]: I0514 00:49:08.169373 1569 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:49:08.169578 kubelet[1569]: I0514 00:49:08.169417 1569 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:49:08.169651 kubelet[1569]: I0514 00:49:08.169644 1569 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:49:08.169698 kubelet[1569]: I0514 00:49:08.169655 1569 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:49:08.169921 kubelet[1569]: I0514 00:49:08.169894 1569 state_mem.go:36] "Initialized new in-memory state store" May 14 00:49:08.173126 kubelet[1569]: I0514 00:49:08.173097 1569 kubelet.go:400] "Attempting to sync node with API server" May 14 00:49:08.173126 kubelet[1569]: I0514 00:49:08.173126 1569 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:49:08.173361 
kubelet[1569]: I0514 00:49:08.173349 1569 kubelet.go:312] "Adding apiserver pod source" May 14 00:49:08.173438 kubelet[1569]: I0514 00:49:08.173427 1569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:49:08.173469 kubelet[1569]: E0514 00:49:08.173440 1569 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:08.173599 kubelet[1569]: E0514 00:49:08.173571 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:08.174544 kubelet[1569]: I0514 00:49:08.174520 1569 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:49:08.175018 kubelet[1569]: I0514 00:49:08.175003 1569 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:49:08.175094 kubelet[1569]: W0514 00:49:08.175057 1569 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:49:08.175858 kubelet[1569]: I0514 00:49:08.175842 1569 server.go:1264] "Started kubelet" May 14 00:49:08.176364 kubelet[1569]: I0514 00:49:08.176319 1569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:49:08.176646 kubelet[1569]: I0514 00:49:08.176618 1569 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:49:08.176944 kubelet[1569]: I0514 00:49:08.176895 1569 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:49:08.178148 kubelet[1569]: I0514 00:49:08.178125 1569 server.go:455] "Adding debug handlers to kubelet server" May 14 00:49:08.178928 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 14 00:49:08.179012 kubelet[1569]: I0514 00:49:08.178942 1569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:49:08.189647 kubelet[1569]: E0514 00:49:08.189621 1569 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" May 14 00:49:08.189728 kubelet[1569]: I0514 00:49:08.189720 1569 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:49:08.189973 kubelet[1569]: I0514 00:49:08.189947 1569 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:49:08.190340 kubelet[1569]: I0514 00:49:08.190312 1569 reconciler.go:26] "Reconciler: start to sync state" May 14 00:49:08.193554 kubelet[1569]: E0514 00:49:08.193528 1569 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:49:08.198950 kubelet[1569]: I0514 00:49:08.198860 1569 factory.go:221] Registration of the systemd container factory successfully May 14 00:49:08.199052 kubelet[1569]: I0514 00:49:08.199026 1569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:49:08.200556 kubelet[1569]: I0514 00:49:08.200535 1569 factory.go:221] Registration of the containerd container factory successfully May 14 00:49:08.210598 kubelet[1569]: E0514 00:49:08.210572 1569 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.109\" not found" node="10.0.0.109" May 14 00:49:08.220805 kubelet[1569]: I0514 00:49:08.220782 1569 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:49:08.220805 kubelet[1569]: I0514 00:49:08.220798 1569 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:49:08.220987 kubelet[1569]: I0514 00:49:08.220820 1569 state_mem.go:36] "Initialized new in-memory state store" May 14 00:49:08.291404 kubelet[1569]: I0514 00:49:08.291376 1569 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.109" May 14 00:49:08.302886 kubelet[1569]: I0514 00:49:08.302862 1569 policy_none.go:49] "None policy: Start" May 14 00:49:08.303597 kubelet[1569]: I0514 00:49:08.303579 1569 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:49:08.303724 kubelet[1569]: I0514 00:49:08.303713 1569 state_mem.go:35] "Initializing new in-memory state store" May 14 00:49:08.309321 kubelet[1569]: I0514 00:49:08.309297 1569 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.109" May 14 00:49:08.309438 kubelet[1569]: I0514 00:49:08.309411 1569 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:49:08.309595 kubelet[1569]: I0514 00:49:08.309558 1569 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:49:08.309681 kubelet[1569]: I0514 00:49:08.309665 1569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:49:08.315690 kubelet[1569]: I0514 00:49:08.315655 1569 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 14 00:49:08.316196 env[1312]: time="2025-05-14T00:49:08.316054906Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:49:08.316467 kubelet[1569]: I0514 00:49:08.316249 1569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 14 00:49:08.356729 kubelet[1569]: I0514 00:49:08.356682 1569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:49:08.357779 kubelet[1569]: I0514 00:49:08.357758 1569 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:49:08.357821 kubelet[1569]: I0514 00:49:08.357790 1569 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:49:08.357821 kubelet[1569]: I0514 00:49:08.357807 1569 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:49:08.357874 kubelet[1569]: E0514 00:49:08.357857 1569 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 14 00:49:08.381481 sudo[1437]: pam_unix(sudo:session): session closed for user root May 14 00:49:08.383340 sshd[1431]: pam_unix(sshd:session): session closed for user core May 14 00:49:08.385821 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:48954.service: Deactivated successfully. May 14 00:49:08.386838 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. May 14 00:49:08.386923 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:49:08.387758 systemd-logind[1301]: Removed session 5. May 14 00:49:09.129019 kubelet[1569]: I0514 00:49:09.128987 1569 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 14 00:49:09.129754 kubelet[1569]: W0514 00:49:09.129731 1569 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 14 00:49:09.129887 kubelet[1569]: W0514 00:49:09.129873 1569 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 14 00:49:09.130003 kubelet[1569]: W0514 00:49:09.129989 1569 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 14 00:49:09.173612 kubelet[1569]: I0514 00:49:09.173569 1569 apiserver.go:52] "Watching apiserver" May 14 00:49:09.173727 kubelet[1569]: E0514 00:49:09.173670 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:09.176898 kubelet[1569]: I0514 00:49:09.176868 1569 topology_manager.go:215] "Topology Admit Handler" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" podNamespace="kube-system" podName="cilium-zzv9q" May 14 00:49:09.177040 kubelet[1569]: I0514 00:49:09.177018 1569 topology_manager.go:215] "Topology Admit Handler" podUID="be7961e5-bf3e-44e4-a77b-61b0a459ac23" podNamespace="kube-system" podName="kube-proxy-wwd4h" May 14 00:49:09.191165 kubelet[1569]: I0514 00:49:09.191130 1569 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:49:09.196101 kubelet[1569]: I0514 00:49:09.196066 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2mb\" (UniqueName: \"kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-kube-api-access-bq2mb\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197743 kubelet[1569]: I0514 00:49:09.197149 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/be7961e5-bf3e-44e4-a77b-61b0a459ac23-xtables-lock\") pod \"kube-proxy-wwd4h\" (UID: \"be7961e5-bf3e-44e4-a77b-61b0a459ac23\") " pod="kube-system/kube-proxy-wwd4h" May 14 00:49:09.197816 kubelet[1569]: I0514 00:49:09.197771 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7961e5-bf3e-44e4-a77b-61b0a459ac23-lib-modules\") pod \"kube-proxy-wwd4h\" (UID: \"be7961e5-bf3e-44e4-a77b-61b0a459ac23\") " pod="kube-system/kube-proxy-wwd4h" May 14 00:49:09.197816 kubelet[1569]: I0514 00:49:09.197805 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-bpf-maps\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197883 kubelet[1569]: I0514 00:49:09.197827 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-xtables-lock\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197883 kubelet[1569]: I0514 00:49:09.197843 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-config-path\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197883 kubelet[1569]: I0514 00:49:09.197871 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-kernel\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197954 kubelet[1569]: I0514 00:49:09.197888 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hubble-tls\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197954 kubelet[1569]: I0514 00:49:09.197919 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-run\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.197954 kubelet[1569]: I0514 00:49:09.197939 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-lib-modules\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.198035 kubelet[1569]: I0514 00:49:09.197977 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-net\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" 
May 14 00:49:09.198035 kubelet[1569]: I0514 00:49:09.198012 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be7961e5-bf3e-44e4-a77b-61b0a459ac23-kube-proxy\") pod \"kube-proxy-wwd4h\" (UID: \"be7961e5-bf3e-44e4-a77b-61b0a459ac23\") " pod="kube-system/kube-proxy-wwd4h" May 14 00:49:09.198035 kubelet[1569]: I0514 00:49:09.198028 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-cgroup\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.198096 kubelet[1569]: I0514 00:49:09.198043 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cni-path\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.198096 kubelet[1569]: I0514 00:49:09.198060 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-etc-cni-netd\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.198096 kubelet[1569]: I0514 00:49:09.198074 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hostproc\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.198096 kubelet[1569]: I0514 00:49:09.198089 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-clustermesh-secrets\") pod \"cilium-zzv9q\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " pod="kube-system/cilium-zzv9q" May 14 00:49:09.198180 kubelet[1569]: I0514 00:49:09.198106 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmmm\" (UniqueName: \"kubernetes.io/projected/be7961e5-bf3e-44e4-a77b-61b0a459ac23-kube-api-access-wpmmm\") pod \"kube-proxy-wwd4h\" (UID: \"be7961e5-bf3e-44e4-a77b-61b0a459ac23\") " pod="kube-system/kube-proxy-wwd4h" May 14 00:49:09.481171 kubelet[1569]: E0514 00:49:09.481058 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:09.481481 kubelet[1569]: E0514 00:49:09.481457 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:09.482052 env[1312]: time="2025-05-14T00:49:09.482001040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzv9q,Uid:e4344ad1-2811-4680-b4a1-b7ef0b3607ba,Namespace:kube-system,Attempt:0,}" May 14 00:49:09.482531 env[1312]: time="2025-05-14T00:49:09.482500037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wwd4h,Uid:be7961e5-bf3e-44e4-a77b-61b0a459ac23,Namespace:kube-system,Attempt:0,}" May 14 
00:49:10.138946 env[1312]: time="2025-05-14T00:49:10.138877888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.139674 env[1312]: time="2025-05-14T00:49:10.139652900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.141283 env[1312]: time="2025-05-14T00:49:10.141248812Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.142818 env[1312]: time="2025-05-14T00:49:10.142766167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.144298 env[1312]: time="2025-05-14T00:49:10.144270904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.145779 env[1312]: time="2025-05-14T00:49:10.145751692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.148048 env[1312]: time="2025-05-14T00:49:10.148022842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.148977 env[1312]: time="2025-05-14T00:49:10.148946450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:10.178522 kubelet[1569]: E0514 00:49:10.178434 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:10.183836 env[1312]: time="2025-05-14T00:49:10.183754914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:49:10.183955 env[1312]: time="2025-05-14T00:49:10.183867386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:49:10.183955 env[1312]: time="2025-05-14T00:49:10.183894871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:49:10.184039 env[1312]: time="2025-05-14T00:49:10.183769179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:49:10.184039 env[1312]: time="2025-05-14T00:49:10.183809563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:49:10.184039 env[1312]: time="2025-05-14T00:49:10.183820251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:49:10.184202 env[1312]: time="2025-05-14T00:49:10.184139587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92e13076e7e4dd1d47d03766059ae55e6b43664d68f282c5d62f7ed73fc5a40c pid=1633 runtime=io.containerd.runc.v2 May 14 00:49:10.184202 env[1312]: time="2025-05-14T00:49:10.184177600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8 pid=1632 runtime=io.containerd.runc.v2 May 14 00:49:10.259863 env[1312]: time="2025-05-14T00:49:10.259800295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wwd4h,Uid:be7961e5-bf3e-44e4-a77b-61b0a459ac23,Namespace:kube-system,Attempt:0,} returns sandbox id \"92e13076e7e4dd1d47d03766059ae55e6b43664d68f282c5d62f7ed73fc5a40c\"" May 14 00:49:10.261943 kubelet[1569]: E0514 00:49:10.261548 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:10.262502 env[1312]: time="2025-05-14T00:49:10.262202401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzv9q,Uid:e4344ad1-2811-4680-b4a1-b7ef0b3607ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\"" May 14 00:49:10.262622 env[1312]: time="2025-05-14T00:49:10.262584341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:49:10.263304 kubelet[1569]: E0514 00:49:10.263120 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:10.306381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474912893.mount: Deactivated successfully. May 14 00:49:11.178607 kubelet[1569]: E0514 00:49:11.178529 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:11.263279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778638612.mount: Deactivated successfully. 
May 14 00:49:11.681524 env[1312]: time="2025-05-14T00:49:11.681461205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:11.682591 env[1312]: time="2025-05-14T00:49:11.682561435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:11.683906 env[1312]: time="2025-05-14T00:49:11.683876441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:11.684833 env[1312]: time="2025-05-14T00:49:11.684806793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:11.685498 env[1312]: time="2025-05-14T00:49:11.685470441Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 00:49:11.688047 env[1312]: time="2025-05-14T00:49:11.688016760Z" level=info msg="CreateContainer within sandbox \"92e13076e7e4dd1d47d03766059ae55e6b43664d68f282c5d62f7ed73fc5a40c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:49:11.688231 env[1312]: time="2025-05-14T00:49:11.688045474Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:49:11.700499 env[1312]: time="2025-05-14T00:49:11.700454422Z" level=info msg="CreateContainer within sandbox \"92e13076e7e4dd1d47d03766059ae55e6b43664d68f282c5d62f7ed73fc5a40c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec083b1f8b0af4abbb17db4a9fffd058e23616b177b102db8019117693bc728e\"" May 14 00:49:11.701265 env[1312]: time="2025-05-14T00:49:11.701234574Z" level=info msg="StartContainer for \"ec083b1f8b0af4abbb17db4a9fffd058e23616b177b102db8019117693bc728e\"" May 14 00:49:11.759028 env[1312]: time="2025-05-14T00:49:11.758978875Z" level=info msg="StartContainer for \"ec083b1f8b0af4abbb17db4a9fffd058e23616b177b102db8019117693bc728e\" returns successfully" May 14 00:49:12.179461 kubelet[1569]: E0514 00:49:12.179352 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:12.372926 kubelet[1569]: E0514 00:49:12.372582 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:13.180318 kubelet[1569]: E0514 00:49:13.180265 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:13.380914 kubelet[1569]: E0514 00:49:13.380871 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:14.180981 kubelet[1569]: E0514 00:49:14.180936 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:15.182014 kubelet[1569]: E0514 00:49:15.181972 1569 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:15.650213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541494968.mount: Deactivated successfully. May 14 00:49:16.182759 kubelet[1569]: E0514 00:49:16.182710 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:17.184855 kubelet[1569]: E0514 00:49:17.183113 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:17.765017 env[1312]: time="2025-05-14T00:49:17.764965652Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:17.766040 env[1312]: time="2025-05-14T00:49:17.766005238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:17.767313 env[1312]: time="2025-05-14T00:49:17.767279765Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:17.768501 env[1312]: time="2025-05-14T00:49:17.768466254Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 00:49:17.770442 env[1312]: time="2025-05-14T00:49:17.770408980Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:49:17.779621 env[1312]: time="2025-05-14T00:49:17.779580879Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\"" May 14 00:49:17.780059 env[1312]: time="2025-05-14T00:49:17.780036457Z" level=info msg="StartContainer for \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\"" May 14 00:49:17.839966 env[1312]: time="2025-05-14T00:49:17.839919476Z" level=info msg="StartContainer for \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\" returns successfully" May 14 00:49:18.046378 env[1312]: time="2025-05-14T00:49:18.046036126Z" level=info msg="shim disconnected" id=845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c May 14 00:49:18.046574 env[1312]: time="2025-05-14T00:49:18.046551535Z" level=warning msg="cleaning up after shim disconnected" id=845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c namespace=k8s.io May 14 00:49:18.046635 env[1312]: time="2025-05-14T00:49:18.046622166Z" level=info msg="cleaning up dead shim" May 14 00:49:18.053191 env[1312]: time="2025-05-14T00:49:18.053154866Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:49:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1909 runtime=io.containerd.runc.v2\n" May 14 00:49:18.184073 kubelet[1569]: E0514 00:49:18.184014 1569 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:18.388586 kubelet[1569]: E0514 00:49:18.388323 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:18.390384 env[1312]: time="2025-05-14T00:49:18.390346437Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:49:18.401866 env[1312]: time="2025-05-14T00:49:18.401799825Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\"" May 14 00:49:18.402303 env[1312]: time="2025-05-14T00:49:18.402276333Z" level=info msg="StartContainer for \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\"" May 14 00:49:18.410201 kubelet[1569]: I0514 00:49:18.410144 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wwd4h" podStartSLOduration=8.985786775 podStartE2EDuration="10.410103706s" podCreationTimestamp="2025-05-14 00:49:08 +0000 UTC" firstStartedPulling="2025-05-14 00:49:10.262155387 +0000 UTC m=+2.896671712" lastFinishedPulling="2025-05-14 00:49:11.686472318 +0000 UTC m=+4.320988643" observedRunningTime="2025-05-14 00:49:12.38072444 +0000 UTC m=+5.015240765" watchObservedRunningTime="2025-05-14 00:49:18.410103706 +0000 UTC m=+11.044619991" May 14 00:49:18.462927 env[1312]: time="2025-05-14T00:49:18.457099515Z" level=info msg="StartContainer for \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\" returns successfully" May 14 00:49:18.499891 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:49:18.500182 systemd[1]: Stopped systemd-sysctl.service. May 14 00:49:18.500433 systemd[1]: Stopping systemd-sysctl.service... May 14 00:49:18.502002 systemd[1]: Starting systemd-sysctl.service... May 14 00:49:18.509526 systemd[1]: Finished systemd-sysctl.service. May 14 00:49:18.522462 env[1312]: time="2025-05-14T00:49:18.522419582Z" level=info msg="shim disconnected" id=f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435 May 14 00:49:18.522715 env[1312]: time="2025-05-14T00:49:18.522694053Z" level=warning msg="cleaning up after shim disconnected" id=f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435 namespace=k8s.io May 14 00:49:18.522796 env[1312]: time="2025-05-14T00:49:18.522781430Z" level=info msg="cleaning up dead shim" May 14 00:49:18.529445 env[1312]: time="2025-05-14T00:49:18.529410962Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:49:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1974 runtime=io.containerd.runc.v2\n" May 14 00:49:18.776501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c-rootfs.mount: Deactivated successfully. 
May 14 00:49:19.184769 kubelet[1569]: E0514 00:49:19.184639 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:19.391598 kubelet[1569]: E0514 00:49:19.391559 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:19.393787 env[1312]: time="2025-05-14T00:49:19.393741548Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:49:19.467369 env[1312]: time="2025-05-14T00:49:19.467255752Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\"" May 14 00:49:19.468049 env[1312]: time="2025-05-14T00:49:19.468013073Z" level=info msg="StartContainer for \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\"" May 14 00:49:19.518138 env[1312]: time="2025-05-14T00:49:19.518099957Z" level=info msg="StartContainer for \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\" returns successfully" May 14 00:49:19.545975 env[1312]: time="2025-05-14T00:49:19.545934175Z" level=info msg="shim disconnected" id=9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a May 14 00:49:19.546196 env[1312]: time="2025-05-14T00:49:19.546177429Z" level=warning msg="cleaning up after shim disconnected" id=9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a namespace=k8s.io May 14 00:49:19.546262 env[1312]: time="2025-05-14T00:49:19.546248006Z" level=info msg="cleaning up dead shim" May 14 00:49:19.552991 env[1312]: time="2025-05-14T00:49:19.552956347Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:49:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2039 runtime=io.containerd.runc.v2\n" May 14 00:49:19.776091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a-rootfs.mount: Deactivated successfully. May 14 00:49:20.185097 kubelet[1569]: E0514 00:49:20.184956 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:20.395591 kubelet[1569]: E0514 00:49:20.395438 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:20.399086 env[1312]: time="2025-05-14T00:49:20.399035546Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:49:20.408699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125619580.mount: Deactivated successfully. 
May 14 00:49:20.412782 env[1312]: time="2025-05-14T00:49:20.412745557Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\"" May 14 00:49:20.413349 env[1312]: time="2025-05-14T00:49:20.413306672Z" level=info msg="StartContainer for \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\"" May 14 00:49:20.460987 env[1312]: time="2025-05-14T00:49:20.460889068Z" level=info msg="StartContainer for \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\" returns successfully" May 14 00:49:20.476033 env[1312]: time="2025-05-14T00:49:20.475989232Z" level=info msg="shim disconnected" id=76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5 May 14 00:49:20.476033 env[1312]: time="2025-05-14T00:49:20.476032084Z" level=warning msg="cleaning up after shim disconnected" id=76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5 namespace=k8s.io May 14 00:49:20.476232 env[1312]: time="2025-05-14T00:49:20.476042336Z" level=info msg="cleaning up dead shim" May 14 00:49:20.482277 env[1312]: time="2025-05-14T00:49:20.482242754Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:49:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2094 runtime=io.containerd.runc.v2\n" May 14 00:49:20.776207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5-rootfs.mount: Deactivated successfully. May 14 00:49:21.185940 kubelet[1569]: E0514 00:49:21.185829 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:21.399983 kubelet[1569]: E0514 00:49:21.399950 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:21.402714 env[1312]: time="2025-05-14T00:49:21.402664715Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:49:21.417571 env[1312]: time="2025-05-14T00:49:21.417520313Z" level=info msg="CreateContainer within sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\"" May 14 00:49:21.418153 env[1312]: time="2025-05-14T00:49:21.418127593Z" level=info msg="StartContainer for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\"" May 14 00:49:21.488970 env[1312]: time="2025-05-14T00:49:21.486205537Z" level=info msg="StartContainer for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" returns successfully" May 14 00:49:21.649239 kubelet[1569]: I0514 00:49:21.649019 1569 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:49:21.874940 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 14 00:49:22.100930 kernel: Initializing XFRM netlink socket May 14 00:49:22.103927 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 14 00:49:22.186677 kubelet[1569]: E0514 00:49:22.186634 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:22.403755 kubelet[1569]: E0514 00:49:22.403641 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:22.417768 kubelet[1569]: I0514 00:49:22.417501 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zzv9q" podStartSLOduration=6.912056097 podStartE2EDuration="14.417486112s" podCreationTimestamp="2025-05-14 00:49:08 +0000 UTC" firstStartedPulling="2025-05-14 00:49:10.26371642 +0000 UTC m=+2.898232705" lastFinishedPulling="2025-05-14 00:49:17.769146394 +0000 UTC m=+10.403662720" observedRunningTime="2025-05-14 00:49:22.417355071 +0000 UTC m=+15.051871396" watchObservedRunningTime="2025-05-14 00:49:22.417486112 +0000 UTC m=+15.052002437" May 14 00:49:23.187078 kubelet[1569]: E0514 00:49:23.187029 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:23.404807 kubelet[1569]: E0514 00:49:23.404744 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:23.708804 systemd-networkd[1100]: cilium_host: Link UP May 14 00:49:23.710659 systemd-networkd[1100]: cilium_net: Link UP May 14 00:49:23.710837 systemd-networkd[1100]: cilium_net: Gained carrier May 14 00:49:23.710935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 14 00:49:23.710976 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 14 00:49:23.711003 systemd-networkd[1100]: cilium_host: Gained carrier May 14 00:49:23.748980 systemd-networkd[1100]: cilium_net: Gained IPv6LL May 14 00:49:23.790936 systemd-networkd[1100]: cilium_vxlan: Link UP May 14 00:49:23.790945 systemd-networkd[1100]: cilium_vxlan: Gained carrier May 14 00:49:24.101928 kernel: NET: Registered PF_ALG protocol family May 14 00:49:24.187977 kubelet[1569]: E0514 00:49:24.187923 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:24.290375 kubelet[1569]: I0514 00:49:24.290316 1569 topology_manager.go:215] "Topology Admit Handler" podUID="d9bb0e0d-4fc9-4f93-8883-d5b9985e2f9f" podNamespace="default" podName="nginx-deployment-85f456d6dd-rdhw2" May 14 00:49:24.389140 kubelet[1569]: I0514 00:49:24.389100 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndlr\" (UniqueName: \"kubernetes.io/projected/d9bb0e0d-4fc9-4f93-8883-d5b9985e2f9f-kube-api-access-4ndlr\") pod \"nginx-deployment-85f456d6dd-rdhw2\" (UID: \"d9bb0e0d-4fc9-4f93-8883-d5b9985e2f9f\") " pod="default/nginx-deployment-85f456d6dd-rdhw2" May 14 00:49:24.406088 kubelet[1569]: E0514 00:49:24.406050 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:24.462099 systemd-networkd[1100]: cilium_host: Gained IPv6LL May 14 00:49:24.595776 env[1312]: time="2025-05-14T00:49:24.595719108Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-rdhw2,Uid:d9bb0e0d-4fc9-4f93-8883-d5b9985e2f9f,Namespace:default,Attempt:0,}" May 14 00:49:24.695657 systemd-networkd[1100]: lxc_health: Link UP May 14 00:49:24.706005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:49:24.704488 systemd-networkd[1100]: lxc_health: Gained carrier May 14 00:49:25.132014 systemd-networkd[1100]: lxca0c1850f5ba9: Link UP May 14 00:49:25.142923 kernel: eth0: renamed from tmp9350a May 14 00:49:25.154689 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:49:25.154783 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca0c1850f5ba9: link becomes ready May 14 00:49:25.154214 systemd-networkd[1100]: lxca0c1850f5ba9: Gained carrier May 14 00:49:25.188748 kubelet[1569]: E0514 00:49:25.188708 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:25.358021 systemd-networkd[1100]: cilium_vxlan: Gained IPv6LL May 14 00:49:26.189963 kubelet[1569]: E0514 00:49:26.189920 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:26.510031 systemd-networkd[1100]: lxca0c1850f5ba9: Gained IPv6LL May 14 00:49:26.539677 kubelet[1569]: E0514 00:49:26.539634 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:26.574029 systemd-networkd[1100]: lxc_health: Gained IPv6LL May 14 00:49:27.191283 kubelet[1569]: E0514 00:49:27.191212 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:28.173767 kubelet[1569]: E0514 00:49:28.173724 1569 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:28.192021 kubelet[1569]: E0514 00:49:28.191993 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:28.713411 env[1312]: time="2025-05-14T00:49:28.713338944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:49:28.713411 env[1312]: time="2025-05-14T00:49:28.713379440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:49:28.713411 env[1312]: time="2025-05-14T00:49:28.713390245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:49:28.713792 env[1312]: time="2025-05-14T00:49:28.713514216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9350aabb6dda0a2514c74a51c7e83cfb12672358838d03cc58e5f36ac20cf87e pid=2631 runtime=io.containerd.runc.v2 May 14 00:49:28.729262 systemd[1]: run-containerd-runc-k8s.io-9350aabb6dda0a2514c74a51c7e83cfb12672358838d03cc58e5f36ac20cf87e-runc.EfyAgf.mount: Deactivated successfully. 
May 14 00:49:28.788541 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:49:28.808279 env[1312]: time="2025-05-14T00:49:28.808230155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-rdhw2,Uid:d9bb0e0d-4fc9-4f93-8883-d5b9985e2f9f,Namespace:default,Attempt:0,} returns sandbox id \"9350aabb6dda0a2514c74a51c7e83cfb12672358838d03cc58e5f36ac20cf87e\"" May 14 00:49:28.810122 env[1312]: time="2025-05-14T00:49:28.810079600Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 14 00:49:29.193216 kubelet[1569]: E0514 00:49:29.193093 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:30.065842 kubelet[1569]: I0514 00:49:30.065734 1569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:49:30.067188 kubelet[1569]: E0514 00:49:30.067166 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:30.193729 kubelet[1569]: E0514 00:49:30.193677 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:30.415210 kubelet[1569]: E0514 00:49:30.415122 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:49:30.852315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307999378.mount: Deactivated successfully. May 14 00:49:31.194229 kubelet[1569]: E0514 00:49:31.193950 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:32.068142 env[1312]: time="2025-05-14T00:49:32.068098178Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:32.069582 env[1312]: time="2025-05-14T00:49:32.069552971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:32.071471 env[1312]: time="2025-05-14T00:49:32.071435907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:32.073995 env[1312]: time="2025-05-14T00:49:32.073957199Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:32.074697 env[1312]: time="2025-05-14T00:49:32.074667211Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 14 00:49:32.077173 env[1312]: time="2025-05-14T00:49:32.077122607Z" level=info msg="CreateContainer within sandbox \"9350aabb6dda0a2514c74a51c7e83cfb12672358838d03cc58e5f36ac20cf87e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 14 00:49:32.093401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782438703.mount: Deactivated successfully. 
May 14 00:49:32.102831 env[1312]: time="2025-05-14T00:49:32.102781749Z" level=info msg="CreateContainer within sandbox \"9350aabb6dda0a2514c74a51c7e83cfb12672358838d03cc58e5f36ac20cf87e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2b42cea9c7837bd3a58522e514e45c050f4f042070bc75f8a0e06bed2e4ce32f\"" May 14 00:49:32.103556 env[1312]: time="2025-05-14T00:49:32.103519168Z" level=info msg="StartContainer for \"2b42cea9c7837bd3a58522e514e45c050f4f042070bc75f8a0e06bed2e4ce32f\"" May 14 00:49:32.163034 env[1312]: time="2025-05-14T00:49:32.162992031Z" level=info msg="StartContainer for \"2b42cea9c7837bd3a58522e514e45c050f4f042070bc75f8a0e06bed2e4ce32f\" returns successfully" May 14 00:49:32.194953 kubelet[1569]: E0514 00:49:32.194916 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:32.426640 kubelet[1569]: I0514 00:49:32.426113 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-rdhw2" podStartSLOduration=5.159932576 podStartE2EDuration="8.426098398s" podCreationTimestamp="2025-05-14 00:49:24 +0000 UTC" firstStartedPulling="2025-05-14 00:49:28.809676834 +0000 UTC m=+21.444193159" lastFinishedPulling="2025-05-14 00:49:32.075842696 +0000 UTC m=+24.710358981" observedRunningTime="2025-05-14 00:49:32.425929917 +0000 UTC m=+25.060446242" watchObservedRunningTime="2025-05-14 00:49:32.426098398 +0000 UTC m=+25.060614723" May 14 00:49:33.196000 kubelet[1569]: E0514 00:49:33.195947 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:34.196371 kubelet[1569]: E0514 00:49:34.196319 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:35.197374 kubelet[1569]: E0514 00:49:35.197329 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:36.198349 kubelet[1569]: E0514 00:49:36.198310 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:36.413808 kubelet[1569]: I0514 00:49:36.413759 1569 topology_manager.go:215] "Topology Admit Handler" podUID="6a6c9247-133f-4812-aed4-7ff548a64007" podNamespace="default" podName="nfs-server-provisioner-0" May 14 00:49:36.457222 kubelet[1569]: I0514 00:49:36.457112 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2kn\" (UniqueName: \"kubernetes.io/projected/6a6c9247-133f-4812-aed4-7ff548a64007-kube-api-access-tn2kn\") pod \"nfs-server-provisioner-0\" (UID: \"6a6c9247-133f-4812-aed4-7ff548a64007\") " pod="default/nfs-server-provisioner-0" May 14 00:49:36.457413 kubelet[1569]: I0514 00:49:36.457392 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6a6c9247-133f-4812-aed4-7ff548a64007-data\") pod \"nfs-server-provisioner-0\" (UID: \"6a6c9247-133f-4812-aed4-7ff548a64007\") " pod="default/nfs-server-provisioner-0" May 14 00:49:36.718179 env[1312]: time="2025-05-14T00:49:36.718081914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6a6c9247-133f-4812-aed4-7ff548a64007,Namespace:default,Attempt:0,}" May 14 00:49:36.749186 systemd-networkd[1100]: lxc3d9916150ba5: Link UP May 14 00:49:36.758948 
kernel: eth0: renamed from tmp9cd6c May 14 00:49:36.766470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:49:36.766566 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3d9916150ba5: link becomes ready May 14 00:49:36.766683 systemd-networkd[1100]: lxc3d9916150ba5: Gained carrier May 14 00:49:36.941993 env[1312]: time="2025-05-14T00:49:36.941895256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:49:36.942175 env[1312]: time="2025-05-14T00:49:36.941979868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:49:36.942264 env[1312]: time="2025-05-14T00:49:36.942236025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:49:36.942539 env[1312]: time="2025-05-14T00:49:36.942507103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cd6ca88e44b1f68f6c67ef06eed247e36922c1df49ceb8fe24d80b03b61ae88 pid=2759 runtime=io.containerd.runc.v2 May 14 00:49:36.974937 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:49:36.990746 env[1312]: time="2025-05-14T00:49:36.990707997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6a6c9247-133f-4812-aed4-7ff548a64007,Namespace:default,Attempt:0,} returns sandbox id \"9cd6ca88e44b1f68f6c67ef06eed247e36922c1df49ceb8fe24d80b03b61ae88\"" May 14 00:49:36.992412 env[1312]: time="2025-05-14T00:49:36.992382475Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 14 00:49:37.199426 kubelet[1569]: E0514 00:49:37.199382 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:38.200513 kubelet[1569]: E0514 00:49:38.200473 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:38.350017 systemd-networkd[1100]: lxc3d9916150ba5: Gained IPv6LL May 14 00:49:39.022663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176340.mount: Deactivated successfully. 
May 14 00:49:39.201377 kubelet[1569]: E0514 00:49:39.201329 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:40.202370 kubelet[1569]: E0514 00:49:40.202326 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:40.754728 env[1312]: time="2025-05-14T00:49:40.754681146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:40.756428 env[1312]: time="2025-05-14T00:49:40.756395901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:40.758070 env[1312]: time="2025-05-14T00:49:40.758038928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:40.760266 env[1312]: time="2025-05-14T00:49:40.760240499Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:40.761818 env[1312]: time="2025-05-14T00:49:40.761789635Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 14 00:49:40.764822 env[1312]: time="2025-05-14T00:49:40.764770655Z" level=info msg="CreateContainer within sandbox \"9cd6ca88e44b1f68f6c67ef06eed247e36922c1df49ceb8fe24d80b03b61ae88\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 14 00:49:40.777842 env[1312]: time="2025-05-14T00:49:40.777787417Z" level=info msg="CreateContainer within sandbox \"9cd6ca88e44b1f68f6c67ef06eed247e36922c1df49ceb8fe24d80b03b61ae88\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b415791aa3e037bb2547ceb9bafc7389def1fbeb43a9971547ae1d5c88b0408d\"" May 14 00:49:40.778399 env[1312]: time="2025-05-14T00:49:40.778349241Z" level=info msg="StartContainer for \"b415791aa3e037bb2547ceb9bafc7389def1fbeb43a9971547ae1d5c88b0408d\"" May 14 00:49:40.895646 env[1312]: time="2025-05-14T00:49:40.895603192Z" level=info msg="StartContainer for \"b415791aa3e037bb2547ceb9bafc7389def1fbeb43a9971547ae1d5c88b0408d\" returns successfully" May 14 00:49:41.203133 kubelet[1569]: E0514 00:49:41.203018 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:41.443658 kubelet[1569]: I0514 00:49:41.443494 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.6721047580000001 podStartE2EDuration="5.443476636s" podCreationTimestamp="2025-05-14 00:49:36 +0000 UTC" firstStartedPulling="2025-05-14 00:49:36.991882884 +0000 UTC m=+29.626399209" lastFinishedPulling="2025-05-14 00:49:40.763254802 +0000 UTC m=+33.397771087" observedRunningTime="2025-05-14 00:49:41.443115277 +0000 UTC m=+34.077631602" watchObservedRunningTime="2025-05-14 00:49:41.443476636 +0000 UTC m=+34.077992961" May 14 00:49:42.203603 kubelet[1569]: E0514 
00:49:42.203557 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:43.204687 kubelet[1569]: E0514 00:49:43.204623 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:44.205005 kubelet[1569]: E0514 00:49:44.204962 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:45.205139 kubelet[1569]: E0514 00:49:45.205094 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:46.011377 update_engine[1305]: I0514 00:49:46.011317 1305 update_attempter.cc:509] Updating boot flags... May 14 00:49:46.205284 kubelet[1569]: E0514 00:49:46.205211 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:47.206347 kubelet[1569]: E0514 00:49:47.206297 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:48.174369 kubelet[1569]: E0514 00:49:48.174332 1569 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:48.206750 kubelet[1569]: E0514 00:49:48.206724 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:49.207315 kubelet[1569]: E0514 00:49:49.207273 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:50.207871 kubelet[1569]: E0514 00:49:50.207828 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:50.831671 kubelet[1569]: I0514 00:49:50.831620 1569 topology_manager.go:215] "Topology Admit Handler" podUID="d8db66c6-6893-4376-88e3-3dec8e540f58" podNamespace="default" podName="test-pod-1" May 14 00:49:51.032251 kubelet[1569]: I0514 00:49:51.032216 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-663a4e91-349b-44fe-bba0-c3037a109a3e\" (UniqueName: \"kubernetes.io/nfs/d8db66c6-6893-4376-88e3-3dec8e540f58-pvc-663a4e91-349b-44fe-bba0-c3037a109a3e\") pod \"test-pod-1\" (UID: \"d8db66c6-6893-4376-88e3-3dec8e540f58\") " pod="default/test-pod-1" May 14 00:49:51.032481 kubelet[1569]: I0514 00:49:51.032458 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjn2b\" (UniqueName: \"kubernetes.io/projected/d8db66c6-6893-4376-88e3-3dec8e540f58-kube-api-access-fjn2b\") pod \"test-pod-1\" (UID: \"d8db66c6-6893-4376-88e3-3dec8e540f58\") " pod="default/test-pod-1" May 14 00:49:51.154939 kernel: FS-Cache: Loaded May 14 00:49:51.185252 kernel: RPC: Registered named UNIX socket transport module. May 14 00:49:51.185315 kernel: RPC: Registered udp transport module. May 14 00:49:51.185344 kernel: RPC: Registered tcp transport module. May 14 00:49:51.186182 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 14 00:49:51.208400 kubelet[1569]: E0514 00:49:51.208348 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:51.228915 kernel: FS-Cache: Netfs 'nfs' registered for caching May 14 00:49:51.358060 kernel: NFS: Registering the id_resolver key type May 14 00:49:51.358219 kernel: Key type id_resolver registered May 14 00:49:51.358245 kernel: Key type id_legacy registered May 14 00:49:51.379894 nfsidmap[2892]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 14 00:49:51.383131 nfsidmap[2895]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 14 00:49:51.439658 env[1312]: time="2025-05-14T00:49:51.439221228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d8db66c6-6893-4376-88e3-3dec8e540f58,Namespace:default,Attempt:0,}" May 14 00:49:51.466084 systemd-networkd[1100]: lxc3785c8c06c2b: Link UP May 14 00:49:51.479929 kernel: eth0: renamed from tmpcb507 May 14 00:49:51.488141 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:49:51.488225 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3785c8c06c2b: link becomes ready May 14 00:49:51.488523 systemd-networkd[1100]: lxc3785c8c06c2b: Gained carrier May 14 00:49:51.657884 env[1312]: time="2025-05-14T00:49:51.657801833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:49:51.657884 env[1312]: time="2025-05-14T00:49:51.657850356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:49:51.657884 env[1312]: time="2025-05-14T00:49:51.657864157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:49:51.658077 env[1312]: time="2025-05-14T00:49:51.657994485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb50752669f3372de7f5936e7d167324a390565ddf34c943113ef50e59d096b8 pid=2929 runtime=io.containerd.runc.v2 May 14 00:49:51.701608 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:49:51.718466 env[1312]: time="2025-05-14T00:49:51.718415452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d8db66c6-6893-4376-88e3-3dec8e540f58,Namespace:default,Attempt:0,} returns sandbox id \"cb50752669f3372de7f5936e7d167324a390565ddf34c943113ef50e59d096b8\"" May 14 00:49:51.719857 env[1312]: time="2025-05-14T00:49:51.719831464Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 14 00:49:51.938158 env[1312]: time="2025-05-14T00:49:51.938113850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:51.939377 env[1312]: time="2025-05-14T00:49:51.939345690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:51.942041 env[1312]: time="2025-05-14T00:49:51.941999142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:51.943777 env[1312]: time="2025-05-14T00:49:51.943745376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:49:51.944397 env[1312]: time="2025-05-14T00:49:51.944363936Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 14 00:49:51.947035 env[1312]: time="2025-05-14T00:49:51.947007148Z" level=info msg="CreateContainer within sandbox \"cb50752669f3372de7f5936e7d167324a390565ddf34c943113ef50e59d096b8\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 14 00:49:51.957596 env[1312]: time="2025-05-14T00:49:51.957168688Z" level=info msg="CreateContainer within sandbox \"cb50752669f3372de7f5936e7d167324a390565ddf34c943113ef50e59d096b8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"402391b902652161d77d5275b171feb589a01f4855c1da3d7e6195b235f27d5d\"" May 14 00:49:51.957988 env[1312]: time="2025-05-14T00:49:51.957871254Z" level=info msg="StartContainer for \"402391b902652161d77d5275b171feb589a01f4855c1da3d7e6195b235f27d5d\"" May 14 00:49:52.017156 env[1312]: time="2025-05-14T00:49:52.017091974Z" level=info msg="StartContainer for \"402391b902652161d77d5275b171feb589a01f4855c1da3d7e6195b235f27d5d\" returns successfully" May 14 00:49:52.209079 kubelet[1569]: E0514 00:49:52.208712 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:53.006057 systemd-networkd[1100]: lxc3785c8c06c2b: Gained IPv6LL May 14 00:49:53.209332 kubelet[1569]: E0514 00:49:53.209290 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 14 00:49:54.209806 kubelet[1569]: E0514 00:49:54.209763 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:55.210872 kubelet[1569]: E0514 00:49:55.210824 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:56.211244 kubelet[1569]: E0514 00:49:56.211195 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:57.212182 kubelet[1569]: E0514 00:49:57.212137 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:58.213003 kubelet[1569]: E0514 00:49:58.212958 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:59.129854 kubelet[1569]: I0514 00:49:59.129688 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.903597815 podStartE2EDuration="23.129670627s" podCreationTimestamp="2025-05-14 00:49:36 +0000 UTC" firstStartedPulling="2025-05-14 00:49:51.719605209 +0000 UTC m=+44.354121494" lastFinishedPulling="2025-05-14 00:49:51.945677981 +0000 UTC m=+44.580194306" observedRunningTime="2025-05-14 00:49:52.459658508 +0000 UTC m=+45.094174793" watchObservedRunningTime="2025-05-14 00:49:59.129670627 +0000 UTC m=+51.764186912" May 14 00:49:59.160605 env[1312]: time="2025-05-14T00:49:59.160543005Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:49:59.165666 env[1312]: time="2025-05-14T00:49:59.165631159Z" level=info msg="StopContainer for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" with timeout 2 (s)" May 14 00:49:59.165943 env[1312]: time="2025-05-14T00:49:59.165915572Z" level=info msg="Stop container \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" with signal terminated" May 14 00:49:59.171022 systemd-networkd[1100]: lxc_health: Link DOWN May 14 00:49:59.171028 systemd-networkd[1100]: lxc_health: Lost carrier May 14 00:49:59.213254 kubelet[1569]: E0514 00:49:59.213222 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:49:59.215176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc-rootfs.mount: Deactivated successfully. 
May 14 00:49:59.227772 env[1312]: time="2025-05-14T00:49:59.227728250Z" level=info msg="shim disconnected" id=e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc May 14 00:49:59.227974 env[1312]: time="2025-05-14T00:49:59.227774532Z" level=warning msg="cleaning up after shim disconnected" id=e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc namespace=k8s.io May 14 00:49:59.227974 env[1312]: time="2025-05-14T00:49:59.227786013Z" level=info msg="cleaning up dead shim" May 14 00:49:59.234393 env[1312]: time="2025-05-14T00:49:59.234348434Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:49:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3058 runtime=io.containerd.runc.v2\n" May 14 00:49:59.236471 env[1312]: time="2025-05-14T00:49:59.236425650Z" level=info msg="StopContainer for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" returns successfully" May 14 00:49:59.237063 env[1312]: time="2025-05-14T00:49:59.237034078Z" level=info msg="StopPodSandbox for \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\"" May 14 00:49:59.237205 env[1312]: time="2025-05-14T00:49:59.237183244Z" level=info msg="Container to stop \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:49:59.237274 env[1312]: time="2025-05-14T00:49:59.237258608Z" level=info msg="Container to stop \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:49:59.237339 env[1312]: time="2025-05-14T00:49:59.237323051Z" level=info msg="Container to stop \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:49:59.237400 env[1312]: time="2025-05-14T00:49:59.237385134Z" level=info msg="Container to stop \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:49:59.237457 env[1312]: time="2025-05-14T00:49:59.237441816Z" level=info msg="Container to stop \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:49:59.239314 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8-shm.mount: Deactivated successfully. May 14 00:49:59.256542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8-rootfs.mount: Deactivated successfully. 
May 14 00:49:59.261630 env[1312]: time="2025-05-14T00:49:59.261581845Z" level=info msg="shim disconnected" id=0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8 May 14 00:49:59.262336 env[1312]: time="2025-05-14T00:49:59.262309038Z" level=warning msg="cleaning up after shim disconnected" id=0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8 namespace=k8s.io May 14 00:49:59.262431 env[1312]: time="2025-05-14T00:49:59.262415843Z" level=info msg="cleaning up dead shim" May 14 00:49:59.268726 env[1312]: time="2025-05-14T00:49:59.268692931Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:49:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3091 runtime=io.containerd.runc.v2\n" May 14 00:49:59.269215 env[1312]: time="2025-05-14T00:49:59.269139672Z" level=info msg="TearDown network for sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" successfully" May 14 00:49:59.269319 env[1312]: time="2025-05-14T00:49:59.269300439Z" level=info msg="StopPodSandbox for \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" returns successfully" May 14 00:49:59.379522 kubelet[1569]: I0514 00:49:59.379470 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-run\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379522 kubelet[1569]: I0514 00:49:59.379514 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-lib-modules\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379522 kubelet[1569]: I0514 00:49:59.379533 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hostproc\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379751 kubelet[1569]: I0514 00:49:59.379549 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-xtables-lock\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379751 kubelet[1569]: I0514 00:49:59.379572 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-config-path\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379751 kubelet[1569]: I0514 00:49:59.379590 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-net\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379751 kubelet[1569]: I0514 00:49:59.379605 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cni-path\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: 
\"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.379751 kubelet[1569]: I0514 00:49:59.379589 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.379751 kubelet[1569]: I0514 00:49:59.379620 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-bpf-maps\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.380847 kubelet[1569]: I0514 00:49:59.379652 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.380847 kubelet[1569]: I0514 00:49:59.379674 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.380847 kubelet[1569]: I0514 00:49:59.379684 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-kernel\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.380847 kubelet[1569]: I0514 00:49:59.379689 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hostproc" (OuterVolumeSpecName: "hostproc") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.380847 kubelet[1569]: I0514 00:49:59.379703 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379705 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-etc-cni-netd\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379727 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-clustermesh-secrets\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379746 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq2mb\" (UniqueName: \"kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-kube-api-access-bq2mb\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379777 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hubble-tls\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379793 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-cgroup\") pod \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\" (UID: \"e4344ad1-2811-4680-b4a1-b7ef0b3607ba\") " May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379825 1569 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-xtables-lock\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.381036 kubelet[1569]: I0514 00:49:59.379844 1569 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-bpf-maps\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.381194 kubelet[1569]: I0514 00:49:59.379854 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-run\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.381194 kubelet[1569]: I0514 00:49:59.379861 1569 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-lib-modules\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.381194 kubelet[1569]: I0514 00:49:59.379869 1569 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hostproc\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.381194 kubelet[1569]: I0514 00:49:59.379888 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.381194 kubelet[1569]: I0514 00:49:59.379923 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.381194 kubelet[1569]: I0514 00:49:59.379940 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.381346 kubelet[1569]: I0514 00:49:59.379986 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.382142 kubelet[1569]: I0514 00:49:59.382106 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:49:59.382230 kubelet[1569]: I0514 00:49:59.382191 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cni-path" (OuterVolumeSpecName: "cni-path") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:49:59.383938 systemd[1]: var-lib-kubelet-pods-e4344ad1\x2d2811\x2d4680\x2db4a1\x2db7ef0b3607ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbq2mb.mount: Deactivated successfully. May 14 00:49:59.384122 kubelet[1569]: I0514 00:49:59.384091 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:49:59.384164 kubelet[1569]: I0514 00:49:59.384091 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-kube-api-access-bq2mb" (OuterVolumeSpecName: "kube-api-access-bq2mb") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "kube-api-access-bq2mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:49:59.384504 kubelet[1569]: I0514 00:49:59.384455 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e4344ad1-2811-4680-b4a1-b7ef0b3607ba" (UID: "e4344ad1-2811-4680-b4a1-b7ef0b3607ba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:49:59.464646 kubelet[1569]: I0514 00:49:59.464618 1569 scope.go:117] "RemoveContainer" containerID="e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc" May 14 00:49:59.468344 env[1312]: time="2025-05-14T00:49:59.468303498Z" level=info msg="RemoveContainer for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\"" May 14 00:49:59.472074 env[1312]: time="2025-05-14T00:49:59.472033789Z" level=info msg="RemoveContainer for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" returns successfully" May 14 00:49:59.472259 kubelet[1569]: I0514 00:49:59.472239 1569 scope.go:117] "RemoveContainer" containerID="76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5" May 14 00:49:59.473350 env[1312]: time="2025-05-14T00:49:59.473313688Z" level=info msg="RemoveContainer for \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\"" May 14 00:49:59.475646 env[1312]: time="2025-05-14T00:49:59.475606873Z" level=info msg="RemoveContainer for \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\" returns successfully" May 14 00:49:59.475818 kubelet[1569]: I0514 00:49:59.475783 1569 scope.go:117] "RemoveContainer" containerID="9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a" May 14 00:49:59.476776 env[1312]: time="2025-05-14T00:49:59.476743965Z" level=info msg="RemoveContainer for \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\"" May 14 00:49:59.479233 env[1312]: time="2025-05-14T00:49:59.479189878Z" level=info msg="RemoveContainer for \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\" returns successfully" May 14 00:49:59.479405 kubelet[1569]: I0514 00:49:59.479374 1569 scope.go:117] "RemoveContainer" containerID="f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.479952 1569 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-hubble-tls\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.479981 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-cgroup\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.479990 1569 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-etc-cni-netd\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.479998 1569 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-clustermesh-secrets\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.480008 1569 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bq2mb\" 
(UniqueName: \"kubernetes.io/projected/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-kube-api-access-bq2mb\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.480016 1569 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-net\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.480025 1569 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cni-path\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480214 kubelet[1569]: I0514 00:49:59.480044 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-cilium-config-path\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480425 kubelet[1569]: I0514 00:49:59.480054 1569 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4344ad1-2811-4680-b4a1-b7ef0b3607ba-host-proc-sys-kernel\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:49:59.480499 env[1312]: time="2025-05-14T00:49:59.480470776Z" level=info msg="RemoveContainer for \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\"" May 14 00:49:59.482497 env[1312]: time="2025-05-14T00:49:59.482461588Z" level=info msg="RemoveContainer for \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\" returns successfully" May 14 00:49:59.482659 kubelet[1569]: I0514 00:49:59.482631 1569 scope.go:117] "RemoveContainer" containerID="845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c" May 14 00:49:59.483577 env[1312]: time="2025-05-14T00:49:59.483547718Z" level=info msg="RemoveContainer for \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\"" May 14 00:49:59.485590 env[1312]: time="2025-05-14T00:49:59.485555810Z" level=info msg="RemoveContainer for \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\" returns successfully" May 14 00:49:59.486013 kubelet[1569]: I0514 00:49:59.485978 1569 scope.go:117] "RemoveContainer" containerID="e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc" May 14 00:49:59.486260 env[1312]: time="2025-05-14T00:49:59.486165598Z" level=error msg="ContainerStatus for \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\": not found" May 14 00:49:59.486400 kubelet[1569]: E0514 00:49:59.486380 1569 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\": not found" containerID="e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc" May 14 00:49:59.486962 kubelet[1569]: I0514 00:49:59.486824 1569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc"} err="failed to get container status \"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e77fc8480b3f78e9fb616a6e6c9843def917b46f4ea15b973428f64eda908efc\": not found" May 14 00:49:59.487085 kubelet[1569]: I0514 00:49:59.487071 1569 scope.go:117] "RemoveContainer" containerID="76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5" May 14 00:49:59.487415 env[1312]: time="2025-05-14T00:49:59.487370733Z" level=error msg="ContainerStatus for \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\": not found" May 14 00:49:59.487571 kubelet[1569]: E0514 00:49:59.487553 1569 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\": not found" containerID="76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5" May 14 00:49:59.487764 kubelet[1569]: I0514 00:49:59.487740 1569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5"} err="failed to get container status \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"76b8f4941061aca1533cbb5547edcde64f3c4e7a276a887e22ad0476454c22c5\": not found" May 14 00:49:59.487876 kubelet[1569]: I0514 00:49:59.487863 1569 scope.go:117] "RemoveContainer" containerID="9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a" May 14 00:49:59.488225 env[1312]: time="2025-05-14T00:49:59.488170730Z" level=error msg="ContainerStatus for \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\": not found" May 14 00:49:59.488349 kubelet[1569]: E0514 00:49:59.488328 1569 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\": not found" containerID="9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a" May 14 00:49:59.488395 kubelet[1569]: I0514 00:49:59.488353 1569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a"} err="failed to get container status \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ac5cc4f172435e6428fb4e4bb5f2d67553fbed1d281d5dc217286689203d91a\": not found" May 14 00:49:59.488395 kubelet[1569]: I0514 00:49:59.488370 1569 scope.go:117] "RemoveContainer" containerID="f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435" May 14 00:49:59.488591 env[1312]: time="2025-05-14T00:49:59.488531427Z" level=error msg="ContainerStatus for \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\": not found" May 14 00:49:59.488724 kubelet[1569]: E0514 00:49:59.488699 1569 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\": not found" containerID="f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435" May 14 00:49:59.488769 kubelet[1569]: I0514 00:49:59.488728 1569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435"} err="failed to get container status \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\": rpc error: code = NotFound desc = an error occurred when try to find container \"f14bebee514af0d85fb4b31976ca02652183dc66da6b8d3db8ae3cc72ef3a435\": not found" May 14 00:49:59.488769 kubelet[1569]: I0514 00:49:59.488748 1569 scope.go:117] "RemoveContainer" containerID="845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c" May 14 00:49:59.488981 env[1312]: time="2025-05-14T00:49:59.488936805Z" level=error msg="ContainerStatus for \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\": not found" May 14 00:49:59.489118 kubelet[1569]: E0514 00:49:59.489072 1569 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\": not found" containerID="845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c" May 14 00:49:59.489168 kubelet[1569]: I0514 00:49:59.489123 1569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c"} err="failed to get container status \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"845dd3f9fac6e25e59383561c1099732ad0b8336bbe690cb615eacdd19929b5c\": not found" May 14 00:50:00.140975 systemd[1]: var-lib-kubelet-pods-e4344ad1\x2d2811\x2d4680\x2db4a1\x2db7ef0b3607ba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:50:00.141133 systemd[1]: var-lib-kubelet-pods-e4344ad1\x2d2811\x2d4680\x2db4a1\x2db7ef0b3607ba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 00:50:00.213649 kubelet[1569]: E0514 00:50:00.213601 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:00.360433 kubelet[1569]: I0514 00:50:00.360393 1569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" path="/var/lib/kubelet/pods/e4344ad1-2811-4680-b4a1-b7ef0b3607ba/volumes" May 14 00:50:01.214489 kubelet[1569]: E0514 00:50:01.214435 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:02.215295 kubelet[1569]: E0514 00:50:02.215254 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:02.240597 kubelet[1569]: I0514 00:50:02.240570 1569 topology_manager.go:215] "Topology Admit Handler" podUID="3e6ee80f-a83d-49f4-916e-de352dcb27a3" podNamespace="kube-system" podName="cilium-operator-599987898-6qt72" May 14 00:50:02.240697 kubelet[1569]: E0514 00:50:02.240619 1569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" containerName="apply-sysctl-overwrites" May 14 00:50:02.240697 kubelet[1569]: E0514 00:50:02.240630 1569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" containerName="mount-bpf-fs" May 14 00:50:02.240697 kubelet[1569]: E0514 00:50:02.240637 1569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" containerName="clean-cilium-state" May 14 00:50:02.240697 kubelet[1569]: E0514 00:50:02.240644 1569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" containerName="cilium-agent" May 14 00:50:02.240697 kubelet[1569]: E0514 00:50:02.240651 1569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" containerName="mount-cgroup" May 14 00:50:02.240697 kubelet[1569]: I0514 00:50:02.240668 1569 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4344ad1-2811-4680-b4a1-b7ef0b3607ba" containerName="cilium-agent" May 14 00:50:02.247903 kubelet[1569]: I0514 00:50:02.247856 1569 topology_manager.go:215] "Topology Admit Handler" podUID="347b3860-d717-424c-9c4b-3dc50d8cd048" podNamespace="kube-system" podName="cilium-jmphz" May 14 00:50:02.393824 kubelet[1569]: I0514 00:50:02.393788 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e6ee80f-a83d-49f4-916e-de352dcb27a3-cilium-config-path\") pod \"cilium-operator-599987898-6qt72\" (UID: \"3e6ee80f-a83d-49f4-916e-de352dcb27a3\") " pod="kube-system/cilium-operator-599987898-6qt72" May 14 00:50:02.394073 kubelet[1569]: I0514 00:50:02.394051 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-etc-cni-netd\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394178 kubelet[1569]: I0514 00:50:02.394164 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-lib-modules\") pod \"cilium-jmphz\" (UID: 
\"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394263 kubelet[1569]: I0514 00:50:02.394250 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-clustermesh-secrets\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394347 kubelet[1569]: I0514 00:50:02.394333 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghfgn\" (UniqueName: \"kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-kube-api-access-ghfgn\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394433 kubelet[1569]: I0514 00:50:02.394419 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cni-path\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394530 kubelet[1569]: I0514 00:50:02.394517 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-config-path\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394621 kubelet[1569]: I0514 00:50:02.394607 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-net\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394712 kubelet[1569]: I0514 00:50:02.394698 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-ipsec-secrets\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394801 kubelet[1569]: I0514 00:50:02.394788 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-run\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394885 kubelet[1569]: I0514 00:50:02.394872 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-bpf-maps\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.394997 kubelet[1569]: I0514 00:50:02.394975 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-hostproc\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.395089 kubelet[1569]: I0514 00:50:02.395076 1569 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-cgroup\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.395176 kubelet[1569]: I0514 00:50:02.395163 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-xtables-lock\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.395258 kubelet[1569]: I0514 00:50:02.395246 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-hubble-tls\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.395344 kubelet[1569]: I0514 00:50:02.395331 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-kernel\") pod \"cilium-jmphz\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " pod="kube-system/cilium-jmphz" May 14 00:50:02.395435 kubelet[1569]: I0514 00:50:02.395422 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwdn\" (UniqueName: \"kubernetes.io/projected/3e6ee80f-a83d-49f4-916e-de352dcb27a3-kube-api-access-ntwdn\") pod \"cilium-operator-599987898-6qt72\" (UID: \"3e6ee80f-a83d-49f4-916e-de352dcb27a3\") " pod="kube-system/cilium-operator-599987898-6qt72" May 14 00:50:02.405558 kubelet[1569]: E0514 00:50:02.405515 1569 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-ghfgn lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-jmphz" podUID="347b3860-d717-424c-9c4b-3dc50d8cd048" May 14 00:50:02.544137 kubelet[1569]: E0514 00:50:02.544027 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:02.545147 env[1312]: time="2025-05-14T00:50:02.545109714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6qt72,Uid:3e6ee80f-a83d-49f4-916e-de352dcb27a3,Namespace:kube-system,Attempt:0,}" May 14 00:50:02.557190 env[1312]: time="2025-05-14T00:50:02.557134287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:50:02.557313 env[1312]: time="2025-05-14T00:50:02.557207050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:50:02.557313 env[1312]: time="2025-05-14T00:50:02.557233291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:50:02.557489 env[1312]: time="2025-05-14T00:50:02.557460100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f26172fbf6764976c110cf98fe136f4a1ce3f7b6e52961de8d213cf29062e77 pid=3119 runtime=io.containerd.runc.v2 May 14 00:50:02.598273 kubelet[1569]: I0514 00:50:02.597833 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-run\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598273 kubelet[1569]: I0514 00:50:02.597872 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-hostproc\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598273 kubelet[1569]: I0514 00:50:02.597889 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-cgroup\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598273 kubelet[1569]: I0514 00:50:02.597914 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-xtables-lock\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598273 kubelet[1569]: I0514 00:50:02.597939 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-etc-cni-netd\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598273 kubelet[1569]: I0514 00:50:02.597955 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-bpf-maps\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598513 kubelet[1569]: I0514 00:50:02.597968 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-kernel\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598513 kubelet[1569]: I0514 00:50:02.597985 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-lib-modules\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598513 kubelet[1569]: I0514 00:50:02.598006 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cni-path\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598513 kubelet[1569]: I0514 00:50:02.598025 1569 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-net\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.598513 kubelet[1569]: I0514 00:50:02.598089 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598621 kubelet[1569]: I0514 00:50:02.598114 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598621 kubelet[1569]: I0514 00:50:02.598128 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-hostproc" (OuterVolumeSpecName: "hostproc") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598621 kubelet[1569]: I0514 00:50:02.598163 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598621 kubelet[1569]: I0514 00:50:02.598176 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598621 kubelet[1569]: I0514 00:50:02.598189 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598730 kubelet[1569]: I0514 00:50:02.598203 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598730 kubelet[1569]: I0514 00:50:02.598215 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598730 kubelet[1569]: I0514 00:50:02.598232 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.598730 kubelet[1569]: I0514 00:50:02.598244 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cni-path" (OuterVolumeSpecName: "cni-path") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:50:02.627505 env[1312]: time="2025-05-14T00:50:02.627456127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6qt72,Uid:3e6ee80f-a83d-49f4-916e-de352dcb27a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f26172fbf6764976c110cf98fe136f4a1ce3f7b6e52961de8d213cf29062e77\"" May 14 00:50:02.628323 kubelet[1569]: E0514 00:50:02.628075 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:02.628960 env[1312]: time="2025-05-14T00:50:02.628927307Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:50:02.699013 kubelet[1569]: I0514 00:50:02.698680 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-clustermesh-secrets\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.699013 kubelet[1569]: I0514 00:50:02.698738 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-config-path\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.699013 kubelet[1569]: I0514 00:50:02.698761 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-ipsec-secrets\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.699013 kubelet[1569]: I0514 00:50:02.698781 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-hubble-tls\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: 
\"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.699013 kubelet[1569]: I0514 00:50:02.698801 1569 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghfgn\" (UniqueName: \"kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-kube-api-access-ghfgn\") pod \"347b3860-d717-424c-9c4b-3dc50d8cd048\" (UID: \"347b3860-d717-424c-9c4b-3dc50d8cd048\") " May 14 00:50:02.699013 kubelet[1569]: I0514 00:50:02.698835 1569 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-bpf-maps\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698846 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-cgroup\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698854 1569 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-xtables-lock\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698862 1569 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-etc-cni-netd\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698871 1569 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-kernel\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698882 1569 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-host-proc-sys-net\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698889 1569 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-lib-modules\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698915 1569 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cni-path\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699276 kubelet[1569]: I0514 00:50:02.698924 1569 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-hostproc\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.699445 kubelet[1569]: I0514 00:50:02.698932 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-run\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.700923 kubelet[1569]: I0514 00:50:02.700861 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:50:02.702249 kubelet[1569]: I0514 00:50:02.702218 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-kube-api-access-ghfgn" (OuterVolumeSpecName: "kube-api-access-ghfgn") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "kube-api-access-ghfgn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:50:02.702506 kubelet[1569]: I0514 00:50:02.702451 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:50:02.702598 kubelet[1569]: I0514 00:50:02.702508 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:50:02.704570 kubelet[1569]: I0514 00:50:02.704540 1569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "347b3860-d717-424c-9c4b-3dc50d8cd048" (UID: "347b3860-d717-424c-9c4b-3dc50d8cd048"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:50:02.800371 kubelet[1569]: I0514 00:50:02.799514 1569 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-clustermesh-secrets\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.800371 kubelet[1569]: I0514 00:50:02.799541 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-config-path\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.800371 kubelet[1569]: I0514 00:50:02.799550 1569 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/347b3860-d717-424c-9c4b-3dc50d8cd048-cilium-ipsec-secrets\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.800371 kubelet[1569]: I0514 00:50:02.799560 1569 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-hubble-tls\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:02.800371 kubelet[1569]: I0514 00:50:02.799568 1569 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ghfgn\" (UniqueName: \"kubernetes.io/projected/347b3860-d717-424c-9c4b-3dc50d8cd048-kube-api-access-ghfgn\") on node \"10.0.0.109\" DevicePath \"\"" May 14 00:50:03.215699 kubelet[1569]: E0514 00:50:03.215653 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:03.320680 kubelet[1569]: E0514 00:50:03.320639 1569 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:50:03.501445 systemd[1]: var-lib-kubelet-pods-347b3860\x2dd717\x2d424c\x2d9c4b\x2d3dc50d8cd048-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dghfgn.mount: Deactivated successfully. May 14 00:50:03.501582 systemd[1]: var-lib-kubelet-pods-347b3860\x2dd717\x2d424c\x2d9c4b\x2d3dc50d8cd048-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:50:03.501672 systemd[1]: var-lib-kubelet-pods-347b3860\x2dd717\x2d424c\x2d9c4b\x2d3dc50d8cd048-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 14 00:50:03.501748 systemd[1]: var-lib-kubelet-pods-347b3860\x2dd717\x2d424c\x2d9c4b\x2d3dc50d8cd048-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 00:50:03.512643 kubelet[1569]: I0514 00:50:03.512564 1569 topology_manager.go:215] "Topology Admit Handler" podUID="4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00" podNamespace="kube-system" podName="cilium-sg28f" May 14 00:50:03.703927 kubelet[1569]: I0514 00:50:03.703842 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-hostproc\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704097 kubelet[1569]: I0514 00:50:03.703938 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-xtables-lock\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704097 kubelet[1569]: I0514 00:50:03.703965 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-cilium-config-path\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704097 kubelet[1569]: I0514 00:50:03.703983 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcsk\" (UniqueName: \"kubernetes.io/projected/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-kube-api-access-bkcsk\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704097 kubelet[1569]: I0514 00:50:03.704072 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-host-proc-sys-kernel\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704230 kubelet[1569]: I0514 00:50:03.704107 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-cilium-run\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704230 kubelet[1569]: I0514 00:50:03.704138 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-cilium-cgroup\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704230 kubelet[1569]: I0514 00:50:03.704156 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-lib-modules\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704230 kubelet[1569]: I0514 00:50:03.704171 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-host-proc-sys-net\") pod \"cilium-sg28f\" (UID: 
\"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704230 kubelet[1569]: I0514 00:50:03.704185 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-hubble-tls\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704230 kubelet[1569]: I0514 00:50:03.704201 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-bpf-maps\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704366 kubelet[1569]: I0514 00:50:03.704215 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-cilium-ipsec-secrets\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704366 kubelet[1569]: I0514 00:50:03.704230 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-cni-path\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704366 kubelet[1569]: I0514 00:50:03.704244 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-etc-cni-netd\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:03.704366 kubelet[1569]: I0514 00:50:03.704260 1569 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00-clustermesh-secrets\") pod \"cilium-sg28f\" (UID: \"4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00\") " pod="kube-system/cilium-sg28f" May 14 00:50:04.118329 kubelet[1569]: E0514 00:50:04.118297 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:04.119411 env[1312]: time="2025-05-14T00:50:04.119375528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg28f,Uid:4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00,Namespace:kube-system,Attempt:0,}" May 14 00:50:04.139061 env[1312]: time="2025-05-14T00:50:04.138963555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:50:04.139061 env[1312]: time="2025-05-14T00:50:04.139012877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:50:04.139061 env[1312]: time="2025-05-14T00:50:04.139023117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:50:04.139292 env[1312]: time="2025-05-14T00:50:04.139241566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81 pid=3169 runtime=io.containerd.runc.v2 May 14 00:50:04.192209 env[1312]: time="2025-05-14T00:50:04.192170624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg28f,Uid:4d4ac6f4-60ee-4f4c-b13f-ea15b8b1ac00,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\"" May 14 00:50:04.193377 kubelet[1569]: E0514 00:50:04.193107 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:04.195029 env[1312]: time="2025-05-14T00:50:04.194994092Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:50:04.203951 env[1312]: time="2025-05-14T00:50:04.203893271Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d468cc4413b9390ff7f28f01c8370047c010100a393f86b45bb924b702c4a1f\"" May 14 00:50:04.204327 env[1312]: time="2025-05-14T00:50:04.204298167Z" level=info msg="StartContainer for \"0d468cc4413b9390ff7f28f01c8370047c010100a393f86b45bb924b702c4a1f\"" May 14 00:50:04.215754 kubelet[1569]: E0514 00:50:04.215728 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:04.250359 env[1312]: time="2025-05-14T00:50:04.249586814Z" level=info msg="StartContainer for \"0d468cc4413b9390ff7f28f01c8370047c010100a393f86b45bb924b702c4a1f\" returns successfully" May 14 00:50:04.278288 env[1312]: time="2025-05-14T00:50:04.278243187Z" level=info msg="shim disconnected" id=0d468cc4413b9390ff7f28f01c8370047c010100a393f86b45bb924b702c4a1f May 14 00:50:04.278288 env[1312]: time="2025-05-14T00:50:04.278290309Z" level=warning msg="cleaning up after shim disconnected" id=0d468cc4413b9390ff7f28f01c8370047c010100a393f86b45bb924b702c4a1f namespace=k8s.io May 14 00:50:04.278288 env[1312]: time="2025-05-14T00:50:04.278300109Z" level=info msg="cleaning up dead shim" May 14 00:50:04.284997 env[1312]: time="2025-05-14T00:50:04.284959843Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:50:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3253 runtime=io.containerd.runc.v2\n" May 14 00:50:04.361130 kubelet[1569]: I0514 00:50:04.360955 1569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347b3860-d717-424c-9c4b-3dc50d8cd048" path="/var/lib/kubelet/pods/347b3860-d717-424c-9c4b-3dc50d8cd048/volumes" May 14 00:50:04.475591 kubelet[1569]: E0514 00:50:04.475559 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:04.478507 env[1312]: time="2025-05-14T00:50:04.478466663Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:50:04.490955 env[1312]: 
time="2025-05-14T00:50:04.490905698Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"470c6571f4cd9e1303d5cc9e6bd0e55dfc847b629288e8384d7e59ad7294c931\"" May 14 00:50:04.493932 env[1312]: time="2025-05-14T00:50:04.493799808Z" level=info msg="StartContainer for \"470c6571f4cd9e1303d5cc9e6bd0e55dfc847b629288e8384d7e59ad7294c931\"" May 14 00:50:04.621033 env[1312]: time="2025-05-14T00:50:04.620981379Z" level=info msg="StartContainer for \"470c6571f4cd9e1303d5cc9e6bd0e55dfc847b629288e8384d7e59ad7294c931\" returns successfully" May 14 00:50:04.633420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-470c6571f4cd9e1303d5cc9e6bd0e55dfc847b629288e8384d7e59ad7294c931-rootfs.mount: Deactivated successfully. May 14 00:50:04.653388 env[1312]: time="2025-05-14T00:50:04.653343333Z" level=info msg="shim disconnected" id=470c6571f4cd9e1303d5cc9e6bd0e55dfc847b629288e8384d7e59ad7294c931 May 14 00:50:04.653388 env[1312]: time="2025-05-14T00:50:04.653386615Z" level=warning msg="cleaning up after shim disconnected" id=470c6571f4cd9e1303d5cc9e6bd0e55dfc847b629288e8384d7e59ad7294c931 namespace=k8s.io May 14 00:50:04.653388 env[1312]: time="2025-05-14T00:50:04.653395895Z" level=info msg="cleaning up dead shim" May 14 00:50:04.665053 env[1312]: time="2025-05-14T00:50:04.665001458Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:50:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3317 runtime=io.containerd.runc.v2\n" May 14 00:50:05.065700 env[1312]: time="2025-05-14T00:50:05.065634936Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:50:05.066992 env[1312]: time="2025-05-14T00:50:05.066965025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:50:05.068624 env[1312]: time="2025-05-14T00:50:05.068589005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:50:05.069131 env[1312]: time="2025-05-14T00:50:05.069097624Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 00:50:05.071649 env[1312]: time="2025-05-14T00:50:05.071622317Z" level=info msg="CreateContainer within sandbox \"5f26172fbf6764976c110cf98fe136f4a1ce3f7b6e52961de8d213cf29062e77\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:50:05.079738 env[1312]: time="2025-05-14T00:50:05.079697854Z" level=info msg="CreateContainer within sandbox \"5f26172fbf6764976c110cf98fe136f4a1ce3f7b6e52961de8d213cf29062e77\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"edf8aa735081ed8beb9c625ad1eb9dfd8782a801635e990010260606b8fefba1\"" May 14 00:50:05.080180 env[1312]: time="2025-05-14T00:50:05.080156311Z" level=info msg="StartContainer for 
\"edf8aa735081ed8beb9c625ad1eb9dfd8782a801635e990010260606b8fefba1\"" May 14 00:50:05.134469 env[1312]: time="2025-05-14T00:50:05.134427272Z" level=info msg="StartContainer for \"edf8aa735081ed8beb9c625ad1eb9dfd8782a801635e990010260606b8fefba1\" returns successfully" May 14 00:50:05.216360 kubelet[1569]: E0514 00:50:05.216315 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:05.478399 kubelet[1569]: E0514 00:50:05.478363 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:05.480364 kubelet[1569]: E0514 00:50:05.480329 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:05.482500 env[1312]: time="2025-05-14T00:50:05.482448660Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:50:05.495548 env[1312]: time="2025-05-14T00:50:05.495495101Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b\"" May 14 00:50:05.496002 env[1312]: time="2025-05-14T00:50:05.495932157Z" level=info msg="StartContainer for \"21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b\"" May 14 00:50:05.513610 kubelet[1569]: I0514 00:50:05.513175 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6qt72" podStartSLOduration=1.071524061 podStartE2EDuration="3.513154552s" podCreationTimestamp="2025-05-14 00:50:02 +0000 UTC" firstStartedPulling="2025-05-14 00:50:02.628645176 +0000 UTC m=+55.263161461" lastFinishedPulling="2025-05-14 00:50:05.070275667 +0000 UTC m=+57.704791952" observedRunningTime="2025-05-14 00:50:05.489471159 +0000 UTC m=+58.123987444" watchObservedRunningTime="2025-05-14 00:50:05.513154552 +0000 UTC m=+58.147670877" May 14 00:50:05.519641 systemd[1]: run-containerd-runc-k8s.io-21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b-runc.pCaWqK.mount: Deactivated successfully. 
May 14 00:50:05.617582 env[1312]: time="2025-05-14T00:50:05.617517719Z" level=info msg="StartContainer for \"21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b\" returns successfully" May 14 00:50:05.633293 env[1312]: time="2025-05-14T00:50:05.633252019Z" level=info msg="shim disconnected" id=21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b May 14 00:50:05.633490 env[1312]: time="2025-05-14T00:50:05.633472988Z" level=warning msg="cleaning up after shim disconnected" id=21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b namespace=k8s.io May 14 00:50:05.633562 env[1312]: time="2025-05-14T00:50:05.633549350Z" level=info msg="cleaning up dead shim" May 14 00:50:05.639962 env[1312]: time="2025-05-14T00:50:05.639930506Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:50:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3413 runtime=io.containerd.runc.v2\n" May 14 00:50:06.216928 kubelet[1569]: E0514 00:50:06.216869 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:06.487053 kubelet[1569]: E0514 00:50:06.484085 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:06.487053 kubelet[1569]: E0514 00:50:06.484583 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:06.488199 env[1312]: time="2025-05-14T00:50:06.488152446Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:50:06.501695 env[1312]: time="2025-05-14T00:50:06.500269958Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6742f46d95527b7b38486d7e3c3b04f174a182d8a06a51ed85ca20df0b5d6d43\"" May 14 00:50:06.500990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21999918edd71062fd668522765b26ff32d5d13952130e666991ce31e1ce3b5b-rootfs.mount: Deactivated successfully. May 14 00:50:06.502257 env[1312]: time="2025-05-14T00:50:06.502224068Z" level=info msg="StartContainer for \"6742f46d95527b7b38486d7e3c3b04f174a182d8a06a51ed85ca20df0b5d6d43\"" May 14 00:50:06.548330 env[1312]: time="2025-05-14T00:50:06.548278870Z" level=info msg="StartContainer for \"6742f46d95527b7b38486d7e3c3b04f174a182d8a06a51ed85ca20df0b5d6d43\" returns successfully" May 14 00:50:06.560643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6742f46d95527b7b38486d7e3c3b04f174a182d8a06a51ed85ca20df0b5d6d43-rootfs.mount: Deactivated successfully. 
May 14 00:50:06.566284 env[1312]: time="2025-05-14T00:50:06.566240031Z" level=info msg="shim disconnected" id=6742f46d95527b7b38486d7e3c3b04f174a182d8a06a51ed85ca20df0b5d6d43 May 14 00:50:06.566284 env[1312]: time="2025-05-14T00:50:06.566284312Z" level=warning msg="cleaning up after shim disconnected" id=6742f46d95527b7b38486d7e3c3b04f174a182d8a06a51ed85ca20df0b5d6d43 namespace=k8s.io May 14 00:50:06.566398 env[1312]: time="2025-05-14T00:50:06.566293473Z" level=info msg="cleaning up dead shim" May 14 00:50:06.572336 env[1312]: time="2025-05-14T00:50:06.572290926Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:50:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3468 runtime=io.containerd.runc.v2\n" May 14 00:50:07.217264 kubelet[1569]: E0514 00:50:07.217214 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:07.487197 kubelet[1569]: E0514 00:50:07.487116 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:07.489491 env[1312]: time="2025-05-14T00:50:07.489452975Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:50:07.503401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185410818.mount: Deactivated successfully. May 14 00:50:07.508724 env[1312]: time="2025-05-14T00:50:07.508668479Z" level=info msg="CreateContainer within sandbox \"bdc8548b09abddb63bd9265cfc1eedc6b5c95418fc67286547840810a5cedb81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"051ea4e26f5376d29ceb1971f8e844ccb8423450717c7c5b048b47b120b5e42e\"" May 14 00:50:07.509698 env[1312]: time="2025-05-14T00:50:07.509655113Z" level=info msg="StartContainer for \"051ea4e26f5376d29ceb1971f8e844ccb8423450717c7c5b048b47b120b5e42e\"" May 14 00:50:07.565109 env[1312]: time="2025-05-14T00:50:07.565056227Z" level=info msg="StartContainer for \"051ea4e26f5376d29ceb1971f8e844ccb8423450717c7c5b048b47b120b5e42e\" returns successfully" May 14 00:50:07.821937 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 14 00:50:08.173734 kubelet[1569]: E0514 00:50:08.173680 1569 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:08.200861 env[1312]: time="2025-05-14T00:50:08.200817595Z" level=info msg="StopPodSandbox for \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\"" May 14 00:50:08.201016 env[1312]: time="2025-05-14T00:50:08.200935439Z" level=info msg="TearDown network for sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" successfully" May 14 00:50:08.201016 env[1312]: time="2025-05-14T00:50:08.200971000Z" level=info msg="StopPodSandbox for \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" returns successfully" May 14 00:50:08.201544 env[1312]: time="2025-05-14T00:50:08.201509058Z" level=info msg="RemovePodSandbox for \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\"" May 14 00:50:08.201680 env[1312]: time="2025-05-14T00:50:08.201640143Z" level=info msg="Forcibly stopping sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\"" May 14 00:50:08.201824 env[1312]: time="2025-05-14T00:50:08.201794388Z" level=info msg="TearDown 
network for sandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" successfully" May 14 00:50:08.206130 env[1312]: time="2025-05-14T00:50:08.206094012Z" level=info msg="RemovePodSandbox \"0ddf605fea8dbfb7f8f77db8caafd1c1eb60d922739ec0ac984fb9cb61d00cd8\" returns successfully" May 14 00:50:08.218326 kubelet[1569]: E0514 00:50:08.218282 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:08.491474 kubelet[1569]: E0514 00:50:08.491443 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:08.510720 kubelet[1569]: I0514 00:50:08.510668 1569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sg28f" podStartSLOduration=5.510648771 podStartE2EDuration="5.510648771s" podCreationTimestamp="2025-05-14 00:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:50:08.510235557 +0000 UTC m=+61.144751882" watchObservedRunningTime="2025-05-14 00:50:08.510648771 +0000 UTC m=+61.145165096" May 14 00:50:09.218625 kubelet[1569]: E0514 00:50:09.218521 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:10.119948 kubelet[1569]: E0514 00:50:10.119888 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:10.219691 kubelet[1569]: E0514 00:50:10.219646 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:10.648037 systemd-networkd[1100]: lxc_health: Link UP May 14 00:50:10.664932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:50:10.667054 systemd-networkd[1100]: lxc_health: Gained carrier May 14 00:50:11.220431 kubelet[1569]: E0514 00:50:11.220372 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:12.120728 kubelet[1569]: E0514 00:50:12.120682 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:12.220702 kubelet[1569]: E0514 00:50:12.220670 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:12.498742 kubelet[1569]: E0514 00:50:12.498701 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:50:12.590072 systemd-networkd[1100]: lxc_health: Gained IPv6LL May 14 00:50:12.972256 systemd[1]: run-containerd-runc-k8s.io-051ea4e26f5376d29ceb1971f8e844ccb8423450717c7c5b048b47b120b5e42e-runc.wP2gP3.mount: Deactivated successfully. 
May 14 00:50:13.221586 kubelet[1569]: E0514 00:50:13.221526 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:14.222284 kubelet[1569]: E0514 00:50:14.222245 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:15.108223 systemd[1]: run-containerd-runc-k8s.io-051ea4e26f5376d29ceb1971f8e844ccb8423450717c7c5b048b47b120b5e42e-runc.PjP8qb.mount: Deactivated successfully. May 14 00:50:15.223311 kubelet[1569]: E0514 00:50:15.223268 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:16.224095 kubelet[1569]: E0514 00:50:16.224049 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:17.224813 kubelet[1569]: E0514 00:50:17.224774 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:50:17.231120 systemd[1]: run-containerd-runc-k8s.io-051ea4e26f5376d29ceb1971f8e844ccb8423450717c7c5b048b47b120b5e42e-runc.74LGNl.mount: Deactivated successfully. May 14 00:50:18.225293 kubelet[1569]: E0514 00:50:18.225247 1569 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"