May 14 00:51:53.742529 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 00:51:53.742549 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025 May 14 00:51:53.742558 kernel: efi: EFI v2.70 by EDK II May 14 00:51:53.742563 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 14 00:51:53.742568 kernel: random: crng init done May 14 00:51:53.742573 kernel: ACPI: Early table checksum verification disabled May 14 00:51:53.742580 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 14 00:51:53.742586 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 00:51:53.742592 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742597 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742602 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742607 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742613 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742618 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742626 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742632 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742637 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:53.742643 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 00:51:53.742649 kernel: NUMA: Failed to initialise from firmware May 14 00:51:53.742654 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:51:53.742660 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] May 14 00:51:53.742666 kernel: Zone ranges: May 14 00:51:53.742671 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:51:53.742678 kernel: DMA32 empty May 14 00:51:53.742683 kernel: Normal empty May 14 00:51:53.742689 kernel: Movable zone start for each node May 14 00:51:53.742694 kernel: Early memory node ranges May 14 00:51:53.742700 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 14 00:51:53.742706 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 14 00:51:53.742711 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 14 00:51:53.742717 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 14 00:51:53.742722 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 14 00:51:53.742728 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 14 00:51:53.742734 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 14 00:51:53.742739 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:51:53.742746 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 00:51:53.742752 kernel: psci: probing for conduit method from ACPI. May 14 00:51:53.742757 kernel: psci: PSCIv1.1 detected in firmware. 
May 14 00:51:53.742763 kernel: psci: Using standard PSCI v0.2 function IDs May 14 00:51:53.742769 kernel: psci: Trusted OS migration not required May 14 00:51:53.742777 kernel: psci: SMC Calling Convention v1.1 May 14 00:51:53.742783 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 00:51:53.742790 kernel: ACPI: SRAT not present May 14 00:51:53.742796 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 14 00:51:53.742802 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 14 00:51:53.742809 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 00:51:53.742815 kernel: Detected PIPT I-cache on CPU0 May 14 00:51:53.742821 kernel: CPU features: detected: GIC system register CPU interface May 14 00:51:53.742827 kernel: CPU features: detected: Hardware dirty bit management May 14 00:51:53.742833 kernel: CPU features: detected: Spectre-v4 May 14 00:51:53.742839 kernel: CPU features: detected: Spectre-BHB May 14 00:51:53.742846 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 00:51:53.742852 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 00:51:53.742859 kernel: CPU features: detected: ARM erratum 1418040 May 14 00:51:53.742865 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 00:51:53.742871 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 14 00:51:53.742877 kernel: Policy zone: DMA May 14 00:51:53.742884 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:51:53.742890 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:51:53.742896 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:51:53.742902 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:51:53.742908 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:51:53.742916 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114956K reserved, 0K cma-reserved) May 14 00:51:53.742922 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 00:51:53.742928 kernel: trace event string verifier disabled May 14 00:51:53.742934 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:51:53.742940 kernel: rcu: RCU event tracing is enabled. May 14 00:51:53.742947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 00:51:53.742953 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:51:53.742959 kernel: Tracing variant of Tasks RCU enabled. May 14 00:51:53.742965 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 00:51:53.742971 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 00:51:53.742977 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 00:51:53.742984 kernel: GICv3: 256 SPIs implemented May 14 00:51:53.742990 kernel: GICv3: 0 Extended SPIs implemented May 14 00:51:53.742996 kernel: GICv3: Distributor has no Range Selector support May 14 00:51:53.743002 kernel: Root IRQ handler: gic_handle_irq May 14 00:51:53.743008 kernel: GICv3: 16 PPIs implemented May 14 00:51:53.743014 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 00:51:53.743020 kernel: ACPI: SRAT not present May 14 00:51:53.743026 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 00:51:53.743032 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:51:53.743038 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 14 00:51:53.743044 kernel: GICv3: using LPI property table @0x00000000400d0000 May 14 00:51:53.743050 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 14 00:51:53.743057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:53.743063 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 00:51:53.743070 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 00:51:53.743076 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 00:51:53.743082 kernel: arm-pv: using stolen time PV May 14 00:51:53.743088 kernel: Console: colour dummy device 80x25 May 14 00:51:53.743094 kernel: ACPI: Core revision 20210730 May 14 00:51:53.743101 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 00:51:53.743107 kernel: pid_max: default: 32768 minimum: 301 May 14 00:51:53.743114 kernel: LSM: Security Framework initializing May 14 00:51:53.743121 kernel: SELinux: Initializing. May 14 00:51:53.743127 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:51:53.743133 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:51:53.743139 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 14 00:51:53.743146 kernel: rcu: Hierarchical SRCU implementation. May 14 00:51:53.743152 kernel: Platform MSI: ITS@0x8080000 domain created May 14 00:51:53.743158 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 00:51:53.743164 kernel: Remapping and enabling EFI services. May 14 00:51:53.743170 kernel: smp: Bringing up secondary CPUs ... 
May 14 00:51:53.743177 kernel: Detected PIPT I-cache on CPU1 May 14 00:51:53.743184 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 00:51:53.743190 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 14 00:51:53.743196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:53.743203 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 00:51:53.743209 kernel: Detected PIPT I-cache on CPU2 May 14 00:51:53.743216 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 00:51:53.743222 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 14 00:51:53.743248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:53.743256 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 00:51:53.743264 kernel: Detected PIPT I-cache on CPU3 May 14 00:51:53.743270 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 00:51:53.743276 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 14 00:51:53.743283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:53.743293 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 00:51:53.743301 kernel: smp: Brought up 1 node, 4 CPUs May 14 00:51:53.743307 kernel: SMP: Total of 4 processors activated. May 14 00:51:53.743313 kernel: CPU features: detected: 32-bit EL0 Support May 14 00:51:53.743320 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 00:51:53.743327 kernel: CPU features: detected: Common not Private translations May 14 00:51:53.743333 kernel: CPU features: detected: CRC32 instructions May 14 00:51:53.743340 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 00:51:53.743347 kernel: CPU features: detected: LSE atomic instructions May 14 00:51:53.743354 kernel: CPU features: detected: Privileged Access Never May 14 00:51:53.743361 kernel: CPU features: detected: RAS Extension Support May 14 00:51:53.743367 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 00:51:53.743374 kernel: CPU: All CPU(s) started at EL1 May 14 00:51:53.743381 kernel: alternatives: patching kernel code May 14 00:51:53.743388 kernel: devtmpfs: initialized May 14 00:51:53.743400 kernel: KASLR enabled May 14 00:51:53.743407 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:51:53.743414 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 00:51:53.743420 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:51:53.743427 kernel: SMBIOS 3.0.0 present. 
May 14 00:51:53.743433 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 14 00:51:53.743440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:51:53.743448 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 00:51:53.743455 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 00:51:53.743462 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 00:51:53.743468 kernel: audit: initializing netlink subsys (disabled) May 14 00:51:53.743475 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 May 14 00:51:53.743482 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:51:53.743488 kernel: cpuidle: using governor menu May 14 00:51:53.743495 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 14 00:51:53.743501 kernel: ASID allocator initialised with 32768 entries May 14 00:51:53.743509 kernel: ACPI: bus type PCI registered May 14 00:51:53.743516 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:51:53.743522 kernel: Serial: AMBA PL011 UART driver May 14 00:51:53.743529 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:51:53.743535 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 14 00:51:53.743542 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:51:53.743548 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 14 00:51:53.743555 kernel: cryptd: max_cpu_qlen set to 1000 May 14 00:51:53.743562 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 00:51:53.743569 kernel: ACPI: Added _OSI(Module Device) May 14 00:51:53.743576 kernel: ACPI: Added _OSI(Processor Device) May 14 00:51:53.743582 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:51:53.743589 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:51:53.743595 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 14 00:51:53.743602 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 14 00:51:53.743608 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 14 00:51:53.743615 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:51:53.743621 kernel: ACPI: Interpreter enabled May 14 00:51:53.743629 kernel: ACPI: Using GIC for interrupt routing May 14 00:51:53.743636 kernel: ACPI: MCFG table detected, 1 entries May 14 00:51:53.743642 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 00:51:53.743649 kernel: printk: console [ttyAMA0] enabled May 14 00:51:53.743656 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:51:53.743781 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:51:53.743844 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 00:51:53.743904 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 00:51:53.743983 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 00:51:53.744043 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 00:51:53.744052 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 00:51:53.744059 kernel: PCI host bridge to bus 0000:00 May 14 00:51:53.744127 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 14 00:51:53.744180 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 14 00:51:53.744246 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 00:51:53.744304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:51:53.744378 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 14 00:51:53.744462 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 14 00:51:53.744527 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 14 00:51:53.744586 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 14 00:51:53.744645 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:51:53.744705 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:51:53.744763 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 14 00:51:53.744822 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 14 00:51:53.744875 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 00:51:53.744927 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 00:51:53.744979 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 00:51:53.744988 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 00:51:53.744995 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 00:51:53.745003 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 00:51:53.745010 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 00:51:53.745016 kernel: iommu: Default domain type: Translated May 14 00:51:53.745023 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 00:51:53.745030 kernel: vgaarb: loaded May 14 00:51:53.745036 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 00:51:53.745043 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 00:51:53.745050 kernel: PTP clock support registered May 14 00:51:53.745056 kernel: Registered efivars operations May 14 00:51:53.745064 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 00:51:53.745071 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:51:53.745077 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:51:53.745084 kernel: pnp: PnP ACPI init May 14 00:51:53.745147 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 00:51:53.745157 kernel: pnp: PnP ACPI: found 1 devices May 14 00:51:53.745164 kernel: NET: Registered PF_INET protocol family May 14 00:51:53.745170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 00:51:53.745178 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 00:51:53.745185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:51:53.745192 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 00:51:53.745199 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 14 00:51:53.745205 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 00:51:53.745212 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:51:53.745219 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:51:53.745225 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:51:53.745241 kernel: PCI: CLS 0 bytes, default 64 May 14 00:51:53.745250 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 00:51:53.745256 kernel: kvm [1]: HYP mode not available May 14 00:51:53.745263 kernel: Initialise system trusted keyrings May 14 00:51:53.745270 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 00:51:53.745276 kernel: Key type asymmetric registered May 14 00:51:53.745283 kernel: Asymmetric key parser 'x509' registered May 14 00:51:53.745290 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 00:51:53.745297 kernel: io scheduler mq-deadline registered May 14 00:51:53.745303 kernel: io scheduler kyber registered May 14 00:51:53.745312 kernel: io scheduler bfq registered May 14 00:51:53.745318 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 00:51:53.745325 kernel: ACPI: button: Power Button [PWRB] May 14 00:51:53.745333 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 00:51:53.745403 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 00:51:53.745413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:51:53.745419 kernel: thunder_xcv, ver 1.0 May 14 00:51:53.745426 kernel: thunder_bgx, ver 1.0 May 14 00:51:53.745433 kernel: nicpf, ver 1.0 May 14 00:51:53.745441 kernel: nicvf, ver 1.0 May 14 00:51:53.745512 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 00:51:53.745569 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:51:53 UTC (1747183913) May 14 00:51:53.745578 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:51:53.745593 kernel: NET: Registered PF_INET6 protocol family May 14 00:51:53.745606 kernel: Segment Routing with IPv6 May 14 00:51:53.745613 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:51:53.745620 kernel: NET: Registered PF_PACKET protocol family May 14 00:51:53.745628 kernel: Key type 
dns_resolver registered May 14 00:51:53.745635 kernel: registered taskstats version 1 May 14 00:51:53.745641 kernel: Loading compiled-in X.509 certificates May 14 00:51:53.745648 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b' May 14 00:51:53.745655 kernel: Key type .fscrypt registered May 14 00:51:53.745661 kernel: Key type fscrypt-provisioning registered May 14 00:51:53.745668 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 00:51:53.745675 kernel: ima: Allocated hash algorithm: sha1 May 14 00:51:53.745681 kernel: ima: No architecture policies found May 14 00:51:53.745689 kernel: clk: Disabling unused clocks May 14 00:51:53.745696 kernel: Freeing unused kernel memory: 36480K May 14 00:51:53.745702 kernel: Run /init as init process May 14 00:51:53.745709 kernel: with arguments: May 14 00:51:53.745715 kernel: /init May 14 00:51:53.745722 kernel: with environment: May 14 00:51:53.745728 kernel: HOME=/ May 14 00:51:53.745735 kernel: TERM=linux May 14 00:51:53.745741 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:51:53.745751 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:51:53.745759 systemd[1]: Detected virtualization kvm. May 14 00:51:53.745767 systemd[1]: Detected architecture arm64. May 14 00:51:53.745773 systemd[1]: Running in initrd. May 14 00:51:53.745780 systemd[1]: No hostname configured, using default hostname. May 14 00:51:53.745787 systemd[1]: Hostname set to . May 14 00:51:53.745795 systemd[1]: Initializing machine ID from VM UUID. May 14 00:51:53.745803 systemd[1]: Queued start job for default target initrd.target. May 14 00:51:53.745810 systemd[1]: Started systemd-ask-password-console.path. May 14 00:51:53.745817 systemd[1]: Reached target cryptsetup.target. May 14 00:51:53.745824 systemd[1]: Reached target paths.target. May 14 00:51:53.745832 systemd[1]: Reached target slices.target. May 14 00:51:53.745839 systemd[1]: Reached target swap.target. May 14 00:51:53.745846 systemd[1]: Reached target timers.target. May 14 00:51:53.745853 systemd[1]: Listening on iscsid.socket. May 14 00:51:53.745861 systemd[1]: Listening on iscsiuio.socket. May 14 00:51:53.745869 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:51:53.745876 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:51:53.745883 systemd[1]: Listening on systemd-journald.socket. May 14 00:51:53.745890 systemd[1]: Listening on systemd-networkd.socket. May 14 00:51:53.745897 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:51:53.745904 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:51:53.745911 systemd[1]: Reached target sockets.target. May 14 00:51:53.745919 systemd[1]: Starting kmod-static-nodes.service... May 14 00:51:53.745926 systemd[1]: Finished network-cleanup.service. May 14 00:51:53.745933 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:51:53.745941 systemd[1]: Starting systemd-journald.service... May 14 00:51:53.745948 systemd[1]: Starting systemd-modules-load.service... May 14 00:51:53.745955 systemd[1]: Starting systemd-resolved.service... May 14 00:51:53.745962 systemd[1]: Starting systemd-vconsole-setup.service... 
May 14 00:51:53.745969 systemd[1]: Finished kmod-static-nodes.service. May 14 00:51:53.745976 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:51:53.745984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:51:53.745991 systemd[1]: Finished systemd-vconsole-setup.service. May 14 00:51:53.745999 kernel: audit: type=1130 audit(1747183913.741:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.746006 systemd[1]: Starting dracut-cmdline-ask.service... May 14 00:51:53.746016 systemd-journald[290]: Journal started May 14 00:51:53.746055 systemd-journald[290]: Runtime Journal (/run/log/journal/8d7b2c7d8ebe4f8db731c39b658aba2c) is 6.0M, max 48.7M, 42.6M free. May 14 00:51:53.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.735146 systemd-modules-load[291]: Inserted module 'overlay' May 14 00:51:53.747652 systemd[1]: Started systemd-journald.service. May 14 00:51:53.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.751243 kernel: audit: type=1130 audit(1747183913.748:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.752584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:51:53.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.758869 kernel: audit: type=1130 audit(1747183913.753:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.758901 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 00:51:53.762720 systemd-resolved[292]: Positive Trust Anchors: May 14 00:51:53.762737 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:51:53.765872 kernel: Bridge firewalling registered May 14 00:51:53.762768 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:51:53.764085 systemd-modules-load[291]: Inserted module 'br_netfilter' May 14 00:51:53.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.766930 systemd-resolved[292]: Defaulting to hostname 'linux'. 
May 14 00:51:53.775892 kernel: audit: type=1130 audit(1747183913.772:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.767710 systemd[1]: Started systemd-resolved.service. May 14 00:51:53.780161 kernel: SCSI subsystem initialized May 14 00:51:53.780178 kernel: audit: type=1130 audit(1747183913.776:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.772847 systemd[1]: Finished dracut-cmdline-ask.service. May 14 00:51:53.777179 systemd[1]: Reached target nss-lookup.target. May 14 00:51:53.781759 systemd[1]: Starting dracut-cmdline.service... May 14 00:51:53.786764 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 00:51:53.786809 kernel: device-mapper: uevent: version 1.0.3 May 14 00:51:53.786820 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 14 00:51:53.791509 systemd-modules-load[291]: Inserted module 'dm_multipath' May 14 00:51:53.792448 dracut-cmdline[307]: dracut-dracut-053 May 14 00:51:53.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.793508 systemd[1]: Finished systemd-modules-load.service. May 14 00:51:53.798305 kernel: audit: type=1130 audit(1747183913.794:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.798352 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:51:53.795124 systemd[1]: Starting systemd-sysctl.service... May 14 00:51:53.803862 systemd[1]: Finished systemd-sysctl.service. May 14 00:51:53.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.808257 kernel: audit: type=1130 audit(1747183913.804:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.859256 kernel: Loading iSCSI transport class v2.0-870. May 14 00:51:53.871246 kernel: iscsi: registered transport (tcp) May 14 00:51:53.887562 kernel: iscsi: registered transport (qla4xxx) May 14 00:51:53.887583 kernel: QLogic iSCSI HBA Driver May 14 00:51:53.923465 systemd[1]: Finished dracut-cmdline.service. 
May 14 00:51:53.927318 kernel: audit: type=1130 audit(1747183913.924:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:53.925137 systemd[1]: Starting dracut-pre-udev.service... May 14 00:51:53.971264 kernel: raid6: neonx8 gen() 13722 MB/s May 14 00:51:53.988246 kernel: raid6: neonx8 xor() 10762 MB/s May 14 00:51:54.005249 kernel: raid6: neonx4 gen() 13541 MB/s May 14 00:51:54.022251 kernel: raid6: neonx4 xor() 11197 MB/s May 14 00:51:54.039247 kernel: raid6: neonx2 gen() 12947 MB/s May 14 00:51:54.056246 kernel: raid6: neonx2 xor() 10311 MB/s May 14 00:51:54.073249 kernel: raid6: neonx1 gen() 10560 MB/s May 14 00:51:54.090261 kernel: raid6: neonx1 xor() 8764 MB/s May 14 00:51:54.107252 kernel: raid6: int64x8 gen() 6260 MB/s May 14 00:51:54.124250 kernel: raid6: int64x8 xor() 3542 MB/s May 14 00:51:54.141247 kernel: raid6: int64x4 gen() 7217 MB/s May 14 00:51:54.158251 kernel: raid6: int64x4 xor() 3848 MB/s May 14 00:51:54.175249 kernel: raid6: int64x2 gen() 6145 MB/s May 14 00:51:54.192252 kernel: raid6: int64x2 xor() 3319 MB/s May 14 00:51:54.209248 kernel: raid6: int64x1 gen() 5046 MB/s May 14 00:51:54.226374 kernel: raid6: int64x1 xor() 2644 MB/s May 14 00:51:54.226388 kernel: raid6: using algorithm neonx8 gen() 13722 MB/s May 14 00:51:54.226403 kernel: raid6: .... xor() 10762 MB/s, rmw enabled May 14 00:51:54.227502 kernel: raid6: using neon recovery algorithm May 14 00:51:54.238451 kernel: xor: measuring software checksum speed May 14 00:51:54.238473 kernel: 8regs : 17191 MB/sec May 14 00:51:54.239747 kernel: 32regs : 20697 MB/sec May 14 00:51:54.239763 kernel: arm64_neon : 27616 MB/sec May 14 00:51:54.239772 kernel: xor: using function: arm64_neon (27616 MB/sec) May 14 00:51:54.296259 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 14 00:51:54.306188 systemd[1]: Finished dracut-pre-udev.service. May 14 00:51:54.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:54.309000 audit: BPF prog-id=7 op=LOAD May 14 00:51:54.309000 audit: BPF prog-id=8 op=LOAD May 14 00:51:54.310252 kernel: audit: type=1130 audit(1747183914.306:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:54.310408 systemd[1]: Starting systemd-udevd.service... May 14 00:51:54.322584 systemd-udevd[489]: Using default interface naming scheme 'v252'. May 14 00:51:54.325917 systemd[1]: Started systemd-udevd.service. May 14 00:51:54.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:54.328484 systemd[1]: Starting dracut-pre-trigger.service... May 14 00:51:54.339802 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation May 14 00:51:54.366335 systemd[1]: Finished dracut-pre-trigger.service. 
May 14 00:51:54.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:54.367942 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:51:54.401048 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:51:54.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:54.438479 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 00:51:54.452625 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:51:54.452641 kernel: GPT:9289727 != 19775487 May 14 00:51:54.452650 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:51:54.452659 kernel: GPT:9289727 != 19775487 May 14 00:51:54.452667 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:51:54.452681 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:51:54.466255 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544) May 14 00:51:54.467350 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 14 00:51:54.472078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 14 00:51:54.473143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 14 00:51:54.479265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 14 00:51:54.482570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:51:54.484215 systemd[1]: Starting disk-uuid.service... May 14 00:51:54.489942 disk-uuid[562]: Primary Header is updated. May 14 00:51:54.489942 disk-uuid[562]: Secondary Entries is updated. May 14 00:51:54.489942 disk-uuid[562]: Secondary Header is updated. May 14 00:51:54.493392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:51:55.508522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:51:55.508569 disk-uuid[563]: The operation has completed successfully. May 14 00:51:55.529624 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:51:55.530792 systemd[1]: Finished disk-uuid.service. May 14 00:51:55.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.533113 systemd[1]: Starting verity-setup.service... May 14 00:51:55.547248 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:51:55.568778 systemd[1]: Found device dev-mapper-usr.device. May 14 00:51:55.570302 systemd[1]: Mounting sysusr-usr.mount... May 14 00:51:55.571453 systemd[1]: Finished verity-setup.service. May 14 00:51:55.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.618005 systemd[1]: Mounted sysusr-usr.mount. May 14 00:51:55.619350 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
May 14 00:51:55.618870 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 14 00:51:55.619628 systemd[1]: Starting ignition-setup.service... May 14 00:51:55.621888 systemd[1]: Starting parse-ip-for-networkd.service... May 14 00:51:55.628648 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:51:55.628686 kernel: BTRFS info (device vda6): using free space tree May 14 00:51:55.628701 kernel: BTRFS info (device vda6): has skinny extents May 14 00:51:55.636177 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 00:51:55.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.643080 systemd[1]: Finished ignition-setup.service. May 14 00:51:55.644954 systemd[1]: Starting ignition-fetch-offline.service... May 14 00:51:55.715525 systemd[1]: Finished parse-ip-for-networkd.service. May 14 00:51:55.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.716000 audit: BPF prog-id=9 op=LOAD May 14 00:51:55.717749 systemd[1]: Starting systemd-networkd.service... May 14 00:51:55.721051 ignition[646]: Ignition 2.14.0 May 14 00:51:55.721064 ignition[646]: Stage: fetch-offline May 14 00:51:55.721111 ignition[646]: no configs at "/usr/lib/ignition/base.d" May 14 00:51:55.721121 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:55.721297 ignition[646]: parsed url from cmdline: "" May 14 00:51:55.721301 ignition[646]: no config URL provided May 14 00:51:55.721305 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:51:55.721312 ignition[646]: no config at "/usr/lib/ignition/user.ign" May 14 00:51:55.721331 ignition[646]: op(1): [started] loading QEMU firmware config module May 14 00:51:55.721335 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 00:51:55.724583 ignition[646]: op(1): [finished] loading QEMU firmware config module May 14 00:51:55.744789 systemd-networkd[738]: lo: Link UP May 14 00:51:55.744800 systemd-networkd[738]: lo: Gained carrier May 14 00:51:55.745152 systemd-networkd[738]: Enumeration completed May 14 00:51:55.745339 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:51:55.746442 systemd-networkd[738]: eth0: Link UP May 14 00:51:55.746445 systemd-networkd[738]: eth0: Gained carrier May 14 00:51:55.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.750183 systemd[1]: Started systemd-networkd.service. May 14 00:51:55.751188 systemd[1]: Reached target network.target. May 14 00:51:55.752805 systemd[1]: Starting iscsiuio.service... May 14 00:51:55.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.761586 systemd[1]: Started iscsiuio.service. May 14 00:51:55.763210 systemd[1]: Starting iscsid.service... 
May 14 00:51:55.766562 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 14 00:51:55.766562 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 14 00:51:55.766562 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 14 00:51:55.766562 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. May 14 00:51:55.766562 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 14 00:51:55.766562 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 14 00:51:55.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.769216 systemd[1]: Started iscsid.service. May 14 00:51:55.773324 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:51:55.774719 systemd[1]: Starting dracut-initqueue.service... May 14 00:51:55.784765 ignition[646]: parsing config with SHA512: 8bce714aa0b7b1efae2add9f90920b659145c9b84687b6e4541352768da1436a7036509d1248864ba40348e12d045b8e50b4e0878afbbb67589e5730c9fdcf96 May 14 00:51:55.784924 systemd[1]: Finished dracut-initqueue.service. May 14 00:51:55.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.785861 systemd[1]: Reached target remote-fs-pre.target. May 14 00:51:55.787242 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:51:55.788837 systemd[1]: Reached target remote-fs.target. May 14 00:51:55.790940 systemd[1]: Starting dracut-pre-mount.service... May 14 00:51:55.793388 unknown[646]: fetched base config from "system" May 14 00:51:55.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.793905 ignition[646]: fetch-offline: fetch-offline passed May 14 00:51:55.793395 unknown[646]: fetched user config from "qemu" May 14 00:51:55.793962 ignition[646]: Ignition finished successfully May 14 00:51:55.794739 systemd[1]: Finished ignition-fetch-offline.service. May 14 00:51:55.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.795887 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:51:55.796595 systemd[1]: Starting ignition-kargs.service... May 14 00:51:55.799822 systemd[1]: Finished dracut-pre-mount.service.
May 14 00:51:55.805227 ignition[757]: Ignition 2.14.0 May 14 00:51:55.805252 ignition[757]: Stage: kargs May 14 00:51:55.805343 ignition[757]: no configs at "/usr/lib/ignition/base.d" May 14 00:51:55.805352 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:55.807368 systemd[1]: Finished ignition-kargs.service. May 14 00:51:55.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.806246 ignition[757]: kargs: kargs passed May 14 00:51:55.806286 ignition[757]: Ignition finished successfully May 14 00:51:55.809625 systemd[1]: Starting ignition-disks.service... May 14 00:51:55.815969 ignition[765]: Ignition 2.14.0 May 14 00:51:55.815986 ignition[765]: Stage: disks May 14 00:51:55.816070 ignition[765]: no configs at "/usr/lib/ignition/base.d" May 14 00:51:55.817954 systemd[1]: Finished ignition-disks.service. May 14 00:51:55.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.816079 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:55.819444 systemd[1]: Reached target initrd-root-device.target. May 14 00:51:55.816966 ignition[765]: disks: disks passed May 14 00:51:55.820741 systemd[1]: Reached target local-fs-pre.target. May 14 00:51:55.817004 ignition[765]: Ignition finished successfully May 14 00:51:55.822294 systemd[1]: Reached target local-fs.target. May 14 00:51:55.823707 systemd[1]: Reached target sysinit.target. May 14 00:51:55.824859 systemd[1]: Reached target basic.target. May 14 00:51:55.827037 systemd[1]: Starting systemd-fsck-root.service... May 14 00:51:55.840456 systemd-fsck[773]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 14 00:51:55.841422 systemd[1]: Finished systemd-fsck-root.service. May 14 00:51:55.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.843687 systemd[1]: Mounting sysroot.mount... May 14 00:51:55.850248 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 14 00:51:55.850599 systemd[1]: Mounted sysroot.mount. May 14 00:51:55.851334 systemd[1]: Reached target initrd-root-fs.target. May 14 00:51:55.853533 systemd[1]: Mounting sysroot-usr.mount... May 14 00:51:55.854440 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 14 00:51:55.854479 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:51:55.854503 systemd[1]: Reached target ignition-diskful.target. May 14 00:51:55.856192 systemd[1]: Mounted sysroot-usr.mount. May 14 00:51:55.858001 systemd[1]: Starting initrd-setup-root.service... 
May 14 00:51:55.862345 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:51:55.866828 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory May 14 00:51:55.870773 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:51:55.874849 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:51:55.909138 systemd[1]: Finished initrd-setup-root.service. May 14 00:51:55.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.910877 systemd[1]: Starting ignition-mount.service... May 14 00:51:55.912271 systemd[1]: Starting sysroot-boot.service... May 14 00:51:55.916748 bash[824]: umount: /sysroot/usr/share/oem: not mounted. May 14 00:51:55.925860 ignition[826]: INFO : Ignition 2.14.0 May 14 00:51:55.925860 ignition[826]: INFO : Stage: mount May 14 00:51:55.927908 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:51:55.927908 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:55.927908 ignition[826]: INFO : mount: mount passed May 14 00:51:55.927908 ignition[826]: INFO : Ignition finished successfully May 14 00:51:55.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:55.928030 systemd[1]: Finished ignition-mount.service. May 14 00:51:55.933041 systemd[1]: Finished sysroot-boot.service. May 14 00:51:55.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:56.578072 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 14 00:51:56.584755 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834) May 14 00:51:56.584784 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:51:56.584794 kernel: BTRFS info (device vda6): using free space tree May 14 00:51:56.586241 kernel: BTRFS info (device vda6): has skinny extents May 14 00:51:56.588841 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 14 00:51:56.590356 systemd[1]: Starting ignition-files.service... 
May 14 00:51:56.603838 ignition[854]: INFO : Ignition 2.14.0 May 14 00:51:56.603838 ignition[854]: INFO : Stage: files May 14 00:51:56.605442 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:51:56.605442 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:56.605442 ignition[854]: DEBUG : files: compiled without relabeling support, skipping May 14 00:51:56.609156 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:51:56.609156 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:51:56.612203 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:51:56.612203 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:51:56.612203 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:51:56.611971 unknown[854]: wrote ssh authorized keys file for user: core May 14 00:51:56.617356 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 14 00:51:56.617356 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 14 00:51:56.658244 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:51:56.836008 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 14 00:51:56.838029 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:51:56.839752 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 00:51:57.115359 systemd-networkd[738]: eth0: Gained IPv6LL May 14 00:51:57.150677 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 00:51:57.249333 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:51:57.250954 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:51:57.265371 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:51:57.265371 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:51:57.265371 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:51:57.265371 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 14 00:51:57.477154 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 00:51:57.825078 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:51:57.825078 ignition[854]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:51:57.828391 ignition[854]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:51:57.870808 ignition[854]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:51:57.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:57.875369 ignition[854]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:51:57.875369 ignition[854]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:51:57.875369 ignition[854]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:51:57.875369 ignition[854]: INFO : files: files passed May 14 00:51:57.875369 ignition[854]: INFO : Ignition finished successfully May 14 00:51:57.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.873224 systemd[1]: Finished ignition-files.service. May 14 00:51:57.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.874980 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 14 00:51:57.885861 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:51:57.875778 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:51:57.889343 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:51:57.876438 systemd[1]: Starting ignition-quench.service... May 14 00:51:57.880822 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:51:57.880902 systemd[1]: Finished ignition-quench.service. May 14 00:51:57.882569 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 14 00:51:57.883818 systemd[1]: Reached target ignition-complete.target. May 14 00:51:57.885653 systemd[1]: Starting initrd-parse-etc.service... May 14 00:51:57.898310 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:51:57.898402 systemd[1]: Finished initrd-parse-etc.service. May 14 00:51:57.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.900033 systemd[1]: Reached target initrd-fs.target. May 14 00:51:57.901218 systemd[1]: Reached target initrd.target. May 14 00:51:57.902450 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:51:57.903145 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:51:57.913350 systemd[1]: Finished dracut-pre-pivot.service. May 14 00:51:57.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:57.914867 systemd[1]: Starting initrd-cleanup.service... May 14 00:51:57.922678 systemd[1]: Stopped target nss-lookup.target. May 14 00:51:57.923611 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:51:57.925128 systemd[1]: Stopped target timers.target. May 14 00:51:57.926461 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:51:57.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.926574 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:51:57.927800 systemd[1]: Stopped target initrd.target. May 14 00:51:57.929128 systemd[1]: Stopped target basic.target. May 14 00:51:57.930379 systemd[1]: Stopped target ignition-complete.target. May 14 00:51:57.931663 systemd[1]: Stopped target ignition-diskful.target. May 14 00:51:57.932932 systemd[1]: Stopped target initrd-root-device.target. May 14 00:51:57.934334 systemd[1]: Stopped target remote-fs.target. May 14 00:51:57.935647 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:51:57.937014 systemd[1]: Stopped target sysinit.target. May 14 00:51:57.938221 systemd[1]: Stopped target local-fs.target. May 14 00:51:57.939499 systemd[1]: Stopped target local-fs-pre.target. May 14 00:51:57.940745 systemd[1]: Stopped target swap.target. May 14 00:51:57.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.941911 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:51:57.942019 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:51:57.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.943298 systemd[1]: Stopped target cryptsetup.target. May 14 00:51:57.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.944454 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:51:57.944555 systemd[1]: Stopped dracut-initqueue.service. May 14 00:51:57.945957 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:51:57.946056 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:51:57.947341 systemd[1]: Stopped target paths.target. May 14 00:51:57.948495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:51:57.952259 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:51:57.954014 systemd[1]: Stopped target slices.target. May 14 00:51:57.955312 systemd[1]: Stopped target sockets.target. May 14 00:51:57.956517 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:51:57.956585 systemd[1]: Closed iscsid.socket. May 14 00:51:57.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.957664 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
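
The Ignition "files" stage logged above is driven entirely by the user-provided config. A minimal Butane sketch (Flatcar variant) that would produce roughly these operations is shown below; the URLs, paths, and unit names are taken from the log, while the spec version, the update.conf body, and all modes are assumptions for illustration, and the small inline files under /home/core (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml) are omitted for brevity.

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
      - path: /opt/bin/cilium.tar.gz
        contents:
          source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
        contents:
          source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw
      - path: /etc/flatcar/update.conf
        # body below is an assumption; only the path appears in the journal
        contents:
          inline: |
            REBOOT_STRATEGY=off
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true        # matches the "setting preset to enabled" entry above
        # contents omitted here; a hypothetical body is sketched below
      - name: coreos-metadata.service
        enabled: false       # matches the "setting preset to disabled" entry above
        # contents omitted; the log shows the unit body being written but not its text
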
May 14 00:51:57.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.957762 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:51:57.959159 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:51:57.959263 systemd[1]: Stopped ignition-files.service. May 14 00:51:57.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.961151 systemd[1]: Stopping ignition-mount.service... May 14 00:51:57.962011 systemd[1]: Stopping iscsiuio.service... May 14 00:51:57.963658 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:51:57.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.971487 ignition[895]: INFO : Ignition 2.14.0 May 14 00:51:57.971487 ignition[895]: INFO : Stage: umount May 14 00:51:57.971487 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:51:57.971487 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:57.971487 ignition[895]: INFO : umount: umount passed May 14 00:51:57.971487 ignition[895]: INFO : Ignition finished successfully May 14 00:51:57.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.963784 systemd[1]: Stopped kmod-static-nodes.service. May 14 00:51:57.965946 systemd[1]: Stopping sysroot-boot.service... May 14 00:51:57.968335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:51:57.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.968473 systemd[1]: Stopped systemd-udev-trigger.service. May 14 00:51:57.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.970018 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:51:57.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.970117 systemd[1]: Stopped dracut-pre-trigger.service. May 14 00:51:57.974669 systemd[1]: iscsiuio.service: Deactivated successfully. 
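
The journal records that prepare-helm.service was written to /etc/systemd/system and preset to enabled, but not its body. A hypothetical oneshot unit consistent with the Helm tarball fetched above might look like this (everything except the tarball path is assumed):

  [Unit]
  Description=Unpack helm to /opt/bin
  ConditionPathExists=!/opt/bin/helm

  [Service]
  Type=oneshot
  RemainAfterExit=true
  ExecStart=/usr/bin/mkdir -p /opt/bin
  ExecStart=/usr/bin/tar --strip-components=1 -C /opt/bin -xzf /opt/helm-v3.17.0-linux-arm64.tar.gz linux-arm64/helm

  [Install]
  WantedBy=multi-user.target

Type=oneshot is what allows the two ExecStart= lines; RemainAfterExit=true keeps the unit "active" so it is not re-run on every target reach.
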
May 14 00:51:57.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.974761 systemd[1]: Stopped iscsiuio.service. May 14 00:51:57.975871 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:51:57.975943 systemd[1]: Stopped ignition-mount.service. May 14 00:51:57.978281 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:51:57.978802 systemd[1]: Stopped target network.target. May 14 00:51:57.979894 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:51:57.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.979926 systemd[1]: Closed iscsiuio.socket. May 14 00:51:57.981054 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:51:57.981098 systemd[1]: Stopped ignition-disks.service. May 14 00:51:57.982658 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:51:58.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.982707 systemd[1]: Stopped ignition-kargs.service. May 14 00:51:58.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.984389 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:51:58.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.984440 systemd[1]: Stopped ignition-setup.service. May 14 00:51:57.985834 systemd[1]: Stopping systemd-networkd.service... May 14 00:51:57.987460 systemd[1]: Stopping systemd-resolved.service... May 14 00:51:58.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.988945 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:51:57.989034 systemd[1]: Finished initrd-cleanup.service. May 14 00:51:57.993288 systemd-networkd[738]: eth0: DHCPv6 lease lost May 14 00:51:58.012000 audit: BPF prog-id=9 op=UNLOAD May 14 00:51:57.994730 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:51:58.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.014000 audit: BPF prog-id=6 op=UNLOAD May 14 00:51:57.994825 systemd[1]: Stopped systemd-networkd.service. May 14 00:51:57.997556 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
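
The "setting preset to enabled/disabled" entries in the files stage correspond to a systemd preset file that Ignition drops under /etc/systemd/system-preset/ (the exact filename is not shown in this log). Its contents amount to:

  enable prepare-helm.service
  disable coreos-metadata.service

Applying the presets is what creates or removes the enablement symlinks, which is exactly what the "removing enablement symlink(s) for coreos-metadata.service" line reflects.
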
May 14 00:51:58.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.997587 systemd[1]: Closed systemd-networkd.socket. May 14 00:51:57.999167 systemd[1]: Stopping network-cleanup.service... May 14 00:51:57.999848 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:51:58.025131 kernel: kauditd_printk_skb: 55 callbacks suppressed May 14 00:51:58.025154 kernel: audit: type=1131 audit(1747183918.020:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:57.999908 systemd[1]: Stopped parse-ip-for-networkd.service. May 14 00:51:58.028852 kernel: audit: type=1131 audit(1747183918.025:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.001386 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:51:58.032750 kernel: audit: type=1131 audit(1747183918.029:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.001440 systemd[1]: Stopped systemd-sysctl.service. May 14 00:51:58.003512 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:51:58.038452 kernel: audit: type=1131 audit(1747183918.034:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.003555 systemd[1]: Stopped systemd-modules-load.service. May 14 00:51:58.044572 kernel: audit: type=1130 audit(1747183918.038:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.044590 kernel: audit: type=1131 audit(1747183918.038:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:58.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.004539 systemd[1]: Stopping systemd-udevd.service... May 14 00:51:58.048026 kernel: audit: type=1131 audit(1747183918.044:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.008759 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:51:58.009194 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:51:58.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.009300 systemd[1]: Stopped systemd-resolved.service. May 14 00:51:58.054814 kernel: audit: type=1131 audit(1747183918.050:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:58.012914 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:51:58.013017 systemd[1]: Stopped network-cleanup.service. May 14 00:51:58.015324 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:51:58.015444 systemd[1]: Stopped systemd-udevd.service. May 14 00:51:58.016982 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:51:58.017017 systemd[1]: Closed systemd-udevd-control.socket. May 14 00:51:58.018477 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:51:58.018512 systemd[1]: Closed systemd-udevd-kernel.socket. May 14 00:51:58.019830 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:51:58.019877 systemd[1]: Stopped dracut-pre-udev.service. May 14 00:51:58.021260 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:51:58.021303 systemd[1]: Stopped dracut-cmdline.service. May 14 00:51:58.025913 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:51:58.025956 systemd[1]: Stopped dracut-cmdline-ask.service. May 14 00:51:58.030405 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 14 00:51:58.033541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:51:58.033596 systemd[1]: Stopped systemd-vconsole-setup.service. May 14 00:51:58.036029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:51:58.036116 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 14 00:51:58.043517 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:51:58.043619 systemd[1]: Stopped sysroot-boot.service. May 14 00:51:58.045318 systemd[1]: Reached target initrd-switch-root.target. May 14 00:51:58.048778 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:51:58.048835 systemd[1]: Stopped initrd-setup-root.service. May 14 00:51:58.051175 systemd[1]: Starting initrd-switch-root.service... May 14 00:51:58.057456 systemd[1]: Switching root. 
May 14 00:51:58.079799 iscsid[744]: iscsid shutting down. May 14 00:51:58.080469 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 14 00:51:58.080519 systemd-journald[290]: Journal stopped May 14 00:52:00.069223 kernel: SELinux: Class mctp_socket not defined in policy. May 14 00:52:00.069309 kernel: SELinux: Class anon_inode not defined in policy. May 14 00:52:00.069322 kernel: SELinux: the above unknown classes and permissions will be allowed May 14 00:52:00.069331 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:52:00.069345 kernel: SELinux: policy capability open_perms=1 May 14 00:52:00.069355 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:52:00.069364 kernel: SELinux: policy capability always_check_network=0 May 14 00:52:00.069374 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:52:00.069389 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:52:00.069400 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:52:00.069409 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:52:00.069418 kernel: audit: type=1403 audit(1747183918.142:74): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:52:00.069439 systemd[1]: Successfully loaded SELinux policy in 36.639ms. May 14 00:52:00.069459 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.905ms. May 14 00:52:00.069475 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:52:00.069485 systemd[1]: Detected virtualization kvm. May 14 00:52:00.069497 systemd[1]: Detected architecture arm64. May 14 00:52:00.069507 systemd[1]: Detected first boot. May 14 00:52:00.069518 systemd[1]: Initializing machine ID from VM UUID. May 14 00:52:00.069528 kernel: audit: type=1400 audit(1747183918.248:75): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:52:00.069539 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 14 00:52:00.069550 systemd[1]: Populated /etc with preset unit settings. May 14 00:52:00.069561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:52:00.069572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:52:00.069584 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:52:00.069595 systemd[1]: iscsid.service: Deactivated successfully. May 14 00:52:00.069606 systemd[1]: Stopped iscsid.service. May 14 00:52:00.069616 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:52:00.069627 systemd[1]: Stopped initrd-switch-root.service. May 14 00:52:00.069637 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:52:00.069649 systemd[1]: Created slice system-addon\x2dconfig.slice. 
May 14 00:52:00.069659 systemd[1]: Created slice system-addon\x2drun.slice. May 14 00:52:00.069670 systemd[1]: Created slice system-getty.slice. May 14 00:52:00.069680 systemd[1]: Created slice system-modprobe.slice. May 14 00:52:00.069691 systemd[1]: Created slice system-serial\x2dgetty.slice. May 14 00:52:00.069701 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 14 00:52:00.069715 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 14 00:52:00.069725 systemd[1]: Created slice user.slice. May 14 00:52:00.069735 systemd[1]: Started systemd-ask-password-console.path. May 14 00:52:00.069746 systemd[1]: Started systemd-ask-password-wall.path. May 14 00:52:00.069757 systemd[1]: Set up automount boot.automount. May 14 00:52:00.069767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 14 00:52:00.069777 systemd[1]: Stopped target initrd-switch-root.target. May 14 00:52:00.069787 systemd[1]: Stopped target initrd-fs.target. May 14 00:52:00.069797 systemd[1]: Stopped target initrd-root-fs.target. May 14 00:52:00.069807 systemd[1]: Reached target integritysetup.target. May 14 00:52:00.069818 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:52:00.069830 systemd[1]: Reached target remote-fs.target. May 14 00:52:00.069840 systemd[1]: Reached target slices.target. May 14 00:52:00.069850 systemd[1]: Reached target swap.target. May 14 00:52:00.069861 systemd[1]: Reached target torcx.target. May 14 00:52:00.069871 systemd[1]: Reached target veritysetup.target. May 14 00:52:00.069881 systemd[1]: Listening on systemd-coredump.socket. May 14 00:52:00.069892 systemd[1]: Listening on systemd-initctl.socket. May 14 00:52:00.069902 systemd[1]: Listening on systemd-networkd.socket. May 14 00:52:00.069912 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:52:00.069924 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:52:00.069935 systemd[1]: Listening on systemd-userdbd.socket. May 14 00:52:00.069946 systemd[1]: Mounting dev-hugepages.mount... May 14 00:52:00.069956 systemd[1]: Mounting dev-mqueue.mount... May 14 00:52:00.069970 systemd[1]: Mounting media.mount... May 14 00:52:00.069981 systemd[1]: Mounting sys-kernel-debug.mount... May 14 00:52:00.069991 systemd[1]: Mounting sys-kernel-tracing.mount... May 14 00:52:00.070001 systemd[1]: Mounting tmp.mount... May 14 00:52:00.070015 systemd[1]: Starting flatcar-tmpfiles.service... May 14 00:52:00.070025 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:52:00.070037 systemd[1]: Starting kmod-static-nodes.service... May 14 00:52:00.070048 systemd[1]: Starting modprobe@configfs.service... May 14 00:52:00.070058 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:52:00.070069 systemd[1]: Starting modprobe@drm.service... May 14 00:52:00.070079 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:52:00.070089 systemd[1]: Starting modprobe@fuse.service... May 14 00:52:00.070100 systemd[1]: Starting modprobe@loop.service... May 14 00:52:00.070111 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:52:00.070123 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:52:00.070133 systemd[1]: Stopped systemd-fsck-root.service. May 14 00:52:00.070144 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
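
The deprecation warnings above come from systemd 252 parsing cgroup-v1 era directives in locksmithd.service; their cgroup-v2 equivalents are CPUWeight= (default 100, roughly corresponding to CPUShares= default 1024) and MemoryMax=. A drop-in expressing the same intent could look like the sketch below; the values are placeholders since the original settings are not shown, and the warning itself persists until the shipped unit file is updated. The docker.socket notice is similar: the torcx-generated unit still says ListenStream=/var/run/docker.sock, which systemd transparently rewrites to /run/docker.sock.

  # hypothetical drop-in, e.g. /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
  [Service]
  # CPUShares=1024 maps roughly to the default CPUWeight=100
  CPUWeight=100
  # MemoryLimit= is the legacy name for MemoryMax=
  MemoryMax=infinity
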
May 14 00:52:00.070154 kernel: loop: module loaded May 14 00:52:00.070164 kernel: fuse: init (API version 7.34) May 14 00:52:00.070174 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:52:00.070184 systemd[1]: Stopped systemd-journald.service. May 14 00:52:00.070194 systemd[1]: Starting systemd-journald.service... May 14 00:52:00.070205 systemd[1]: Starting systemd-modules-load.service... May 14 00:52:00.070215 systemd[1]: Starting systemd-network-generator.service... May 14 00:52:00.070227 systemd[1]: Starting systemd-remount-fs.service... May 14 00:52:00.070244 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:52:00.070255 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:52:00.070265 systemd[1]: Stopped verity-setup.service. May 14 00:52:00.070276 systemd[1]: Mounted dev-hugepages.mount. May 14 00:52:00.070286 systemd[1]: Mounted dev-mqueue.mount. May 14 00:52:00.070298 systemd-journald[1004]: Journal started May 14 00:52:00.070337 systemd-journald[1004]: Runtime Journal (/run/log/journal/8d7b2c7d8ebe4f8db731c39b658aba2c) is 6.0M, max 48.7M, 42.6M free. May 14 00:52:00.070368 systemd[1]: Mounted media.mount. May 14 00:51:58.142000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:51:58.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:51:58.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:51:58.251000 audit: BPF prog-id=10 op=LOAD May 14 00:51:58.252000 audit: BPF prog-id=10 op=UNLOAD May 14 00:51:58.253000 audit: BPF prog-id=11 op=LOAD May 14 00:51:58.253000 audit: BPF prog-id=11 op=UNLOAD May 14 00:51:58.296000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 14 00:51:58.296000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58a2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:51:58.296000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:51:58.297000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 14 00:51:58.297000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:51:58.297000 audit: CWD cwd="/" May 14 
00:51:58.297000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:51:58.297000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:51:58.297000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:51:59.946000 audit: BPF prog-id=12 op=LOAD May 14 00:51:59.946000 audit: BPF prog-id=3 op=UNLOAD May 14 00:51:59.946000 audit: BPF prog-id=13 op=LOAD May 14 00:51:59.946000 audit: BPF prog-id=14 op=LOAD May 14 00:51:59.946000 audit: BPF prog-id=4 op=UNLOAD May 14 00:51:59.946000 audit: BPF prog-id=5 op=UNLOAD May 14 00:51:59.947000 audit: BPF prog-id=15 op=LOAD May 14 00:51:59.947000 audit: BPF prog-id=12 op=UNLOAD May 14 00:51:59.947000 audit: BPF prog-id=16 op=LOAD May 14 00:51:59.947000 audit: BPF prog-id=17 op=LOAD May 14 00:51:59.947000 audit: BPF prog-id=13 op=UNLOAD May 14 00:51:59.947000 audit: BPF prog-id=14 op=UNLOAD May 14 00:51:59.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:59.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:59.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:59.961000 audit: BPF prog-id=15 op=UNLOAD May 14 00:52:00.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:52:00.048000 audit: BPF prog-id=18 op=LOAD May 14 00:52:00.049000 audit: BPF prog-id=19 op=LOAD May 14 00:52:00.049000 audit: BPF prog-id=20 op=LOAD May 14 00:52:00.049000 audit: BPF prog-id=16 op=UNLOAD May 14 00:52:00.049000 audit: BPF prog-id=17 op=UNLOAD May 14 00:52:00.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.067000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:52:00.067000 audit[1004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcaa81620 a2=4000 a3=1 items=0 ppid=1 pid=1004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:52:00.067000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 14 00:51:58.294730 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:51:59.945467 systemd[1]: Queued start job for default target multi-user.target. May 14 00:51:58.295033 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:51:59.945479 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 14 00:51:58.295054 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:51:59.948828 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 14 00:51:58.295085 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 14 00:51:58.295095 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="skipped missing lower profile" missing profile=oem May 14 00:51:58.295124 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 14 00:51:58.295135 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 14 00:51:58.295347 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 14 00:51:58.295382 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:51:58.295394 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:51:58.295858 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 14 00:51:58.295897 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 14 00:51:58.295915 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 14 00:51:58.295929 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 14 00:51:58.295945 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 14 00:51:58.295957 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 14 00:51:59.708001 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:51:59.708287 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:51:59.708398 
/usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:51:59.708576 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:51:59.708629 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 14 00:51:59.708684 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:51:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 14 00:52:00.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.073285 systemd[1]: Started systemd-journald.service. May 14 00:52:00.073602 systemd[1]: Mounted sys-kernel-debug.mount. May 14 00:52:00.074516 systemd[1]: Mounted sys-kernel-tracing.mount. May 14 00:52:00.075419 systemd[1]: Mounted tmp.mount. May 14 00:52:00.076857 systemd[1]: Finished kmod-static-nodes.service. May 14 00:52:00.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.077993 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:52:00.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.078464 systemd[1]: Finished modprobe@configfs.service. May 14 00:52:00.079594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:52:00.079763 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:52:00.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.080910 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:52:00.081299 systemd[1]: Finished modprobe@drm.service. 
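
The torcx-generator lines above show the vendor profile being applied: the docker image at reference com.coreos.cl is unpacked from /usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz and its binaries, networkd units, and systemd units are propagated into /run/torcx. A vendor profile manifest of roughly this shape would drive that selection (the exact "kind" string and file layout are assumptions, not read from this log):

  {
    "kind": "profile-v1",
    "value": {
      "images": [
        { "name": "docker", "reference": "com.coreos.cl" }
      ]
    }
  }
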
May 14 00:52:00.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.082432 systemd[1]: Finished flatcar-tmpfiles.service. May 14 00:52:00.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.083559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:52:00.083755 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:52:00.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.084958 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:52:00.085113 systemd[1]: Finished modprobe@fuse.service. May 14 00:52:00.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.086176 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:52:00.086353 systemd[1]: Finished modprobe@loop.service. May 14 00:52:00.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.087625 systemd[1]: Finished systemd-modules-load.service. May 14 00:52:00.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.088903 systemd[1]: Finished systemd-network-generator.service. May 14 00:52:00.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.090128 systemd[1]: Finished systemd-remount-fs.service. 
May 14 00:52:00.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.091678 systemd[1]: Reached target network-pre.target. May 14 00:52:00.093780 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:52:00.095771 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:52:00.096522 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:52:00.098372 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:52:00.100405 systemd[1]: Starting systemd-journal-flush.service... May 14 00:52:00.101351 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:52:00.102613 systemd[1]: Starting systemd-random-seed.service... May 14 00:52:00.103518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:52:00.104624 systemd[1]: Starting systemd-sysctl.service... May 14 00:52:00.105675 systemd-journald[1004]: Time spent on flushing to /var/log/journal/8d7b2c7d8ebe4f8db731c39b658aba2c is 18.045ms for 996 entries. May 14 00:52:00.105675 systemd-journald[1004]: System Journal (/var/log/journal/8d7b2c7d8ebe4f8db731c39b658aba2c) is 8.0M, max 195.6M, 187.6M free. May 14 00:52:00.134721 systemd-journald[1004]: Received client request to flush runtime journal. May 14 00:52:00.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.107998 systemd[1]: Starting systemd-sysusers.service... May 14 00:52:00.111508 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:52:00.112554 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:52:00.135468 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 00:52:00.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.113496 systemd[1]: Mounted sys-kernel-config.mount. May 14 00:52:00.115757 systemd[1]: Starting systemd-udev-settle.service... May 14 00:52:00.116887 systemd[1]: Finished systemd-random-seed.service. May 14 00:52:00.117936 systemd[1]: Reached target first-boot-complete.target. May 14 00:52:00.129271 systemd[1]: Finished systemd-sysctl.service. May 14 00:52:00.134648 systemd[1]: Finished systemd-sysusers.service. May 14 00:52:00.135897 systemd[1]: Finished systemd-journal-flush.service. 
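
The journald statistics above show the runtime journal (kept in /run early in boot) being flushed into the persistent system journal under /var/log/journal once the root filesystem is writable. The caps reported (48.7M runtime, 195.6M system) are size-based defaults computed from the backing filesystems; they could instead be pinned explicitly in /etc/systemd/journald.conf with something like the following sketch (values are illustrative):

  [Journal]
  Storage=persistent
  RuntimeMaxUse=48M
  SystemMaxUse=195M
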
May 14 00:52:00.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.480608 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:52:00.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.481000 audit: BPF prog-id=21 op=LOAD May 14 00:52:00.481000 audit: BPF prog-id=22 op=LOAD May 14 00:52:00.482000 audit: BPF prog-id=7 op=UNLOAD May 14 00:52:00.482000 audit: BPF prog-id=8 op=UNLOAD May 14 00:52:00.482920 systemd[1]: Starting systemd-udevd.service... May 14 00:52:00.501411 systemd-udevd[1033]: Using default interface naming scheme 'v252'. May 14 00:52:00.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.514000 audit: BPF prog-id=23 op=LOAD May 14 00:52:00.512780 systemd[1]: Started systemd-udevd.service. May 14 00:52:00.515503 systemd[1]: Starting systemd-networkd.service... May 14 00:52:00.522000 audit: BPF prog-id=24 op=LOAD May 14 00:52:00.522000 audit: BPF prog-id=25 op=LOAD May 14 00:52:00.522000 audit: BPF prog-id=26 op=LOAD May 14 00:52:00.523696 systemd[1]: Starting systemd-userdbd.service... May 14 00:52:00.544971 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 14 00:52:00.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.557920 systemd[1]: Started systemd-userdbd.service. May 14 00:52:00.571741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:52:00.621605 systemd[1]: Finished systemd-udev-settle.service. May 14 00:52:00.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.623706 systemd[1]: Starting lvm2-activation-early.service... May 14 00:52:00.624446 systemd-networkd[1042]: lo: Link UP May 14 00:52:00.624454 systemd-networkd[1042]: lo: Gained carrier May 14 00:52:00.624791 systemd-networkd[1042]: Enumeration completed May 14 00:52:00.624863 systemd[1]: Started systemd-networkd.service. May 14 00:52:00.625141 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:52:00.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.628448 systemd-networkd[1042]: eth0: Link UP May 14 00:52:00.628456 systemd-networkd[1042]: eth0: Gained carrier May 14 00:52:00.634550 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:52:00.645418 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:52:00.664260 systemd[1]: Finished lvm2-activation-early.service. 
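
The DHCPv4 lease logged above (10.0.0.131/16 via gateway 10.0.0.1) comes from the catch-all /usr/lib/systemd/network/zz-default.network that Flatcar ships. Functionally it boils down to something like the fragment below; the shipped file carries additional [DHCP] tuning not reproduced here:

  [Match]
  Name=*

  [Network]
  DHCP=yes
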
May 14 00:52:00.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.665213 systemd[1]: Reached target cryptsetup.target. May 14 00:52:00.667172 systemd[1]: Starting lvm2-activation.service... May 14 00:52:00.670784 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:52:00.704177 systemd[1]: Finished lvm2-activation.service. May 14 00:52:00.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.705135 systemd[1]: Reached target local-fs-pre.target. May 14 00:52:00.705988 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:52:00.706021 systemd[1]: Reached target local-fs.target. May 14 00:52:00.706768 systemd[1]: Reached target machines.target. May 14 00:52:00.708807 systemd[1]: Starting ldconfig.service... May 14 00:52:00.709913 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:52:00.709968 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:00.711103 systemd[1]: Starting systemd-boot-update.service... May 14 00:52:00.713020 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:52:00.715283 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:52:00.718337 systemd[1]: Starting systemd-sysext.service... May 14 00:52:00.722655 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) May 14 00:52:00.724242 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 14 00:52:00.725864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 14 00:52:00.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.732445 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:52:00.738128 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:52:00.738375 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:52:00.793251 kernel: loop0: detected capacity change from 0 to 201592 May 14 00:52:00.797002 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:52:00.797610 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:52:00.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:52:00.805869 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:52:00.827261 kernel: loop1: detected capacity change from 0 to 201592 May 14 00:52:00.830724 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) May 14 00:52:00.830724 systemd-fsck[1079]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:52:00.832707 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 14 00:52:00.833523 (sd-sysext)[1083]: Using extensions 'kubernetes'. May 14 00:52:00.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.833851 (sd-sysext)[1083]: Merged extensions into '/usr'. May 14 00:52:00.849522 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:52:00.851031 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:52:00.853481 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:52:00.855939 systemd[1]: Starting modprobe@loop.service... May 14 00:52:00.856913 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:52:00.857109 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:00.857934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:52:00.858059 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:52:00.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.859486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:52:00.859597 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:52:00.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.861018 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:52:00.861132 systemd[1]: Finished modprobe@loop.service. May 14 00:52:00.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:00.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:52:00.862731 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:52:00.862837 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:52:00.901209 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:52:00.905206 systemd[1]: Finished ldconfig.service. May 14 00:52:00.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.068148 systemd[1]: Mounting boot.mount... May 14 00:52:01.070092 systemd[1]: Mounting usr-share-oem.mount... May 14 00:52:01.075983 systemd[1]: Mounted boot.mount. May 14 00:52:01.076994 systemd[1]: Mounted usr-share-oem.mount. May 14 00:52:01.079050 systemd[1]: Finished systemd-sysext.service. May 14 00:52:01.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.081460 systemd[1]: Starting ensure-sysext.service... May 14 00:52:01.083358 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:52:01.088478 systemd[1]: Finished systemd-boot-update.service. May 14 00:52:01.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.089578 systemd[1]: Reloading. May 14 00:52:01.096488 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 14 00:52:01.098115 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:52:01.100695 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:52:01.124516 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-14T00:52:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:52:01.124546 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-14T00:52:01Z" level=info msg="torcx already run" May 14 00:52:01.185948 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:52:01.185968 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:52:01.201348 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 00:52:01.242000 audit: BPF prog-id=27 op=LOAD May 14 00:52:01.242000 audit: BPF prog-id=23 op=UNLOAD May 14 00:52:01.243000 audit: BPF prog-id=28 op=LOAD May 14 00:52:01.243000 audit: BPF prog-id=24 op=UNLOAD May 14 00:52:01.243000 audit: BPF prog-id=29 op=LOAD May 14 00:52:01.243000 audit: BPF prog-id=30 op=LOAD May 14 00:52:01.243000 audit: BPF prog-id=25 op=UNLOAD May 14 00:52:01.243000 audit: BPF prog-id=26 op=UNLOAD May 14 00:52:01.244000 audit: BPF prog-id=31 op=LOAD May 14 00:52:01.244000 audit: BPF prog-id=18 op=UNLOAD May 14 00:52:01.244000 audit: BPF prog-id=32 op=LOAD May 14 00:52:01.244000 audit: BPF prog-id=33 op=LOAD May 14 00:52:01.244000 audit: BPF prog-id=19 op=UNLOAD May 14 00:52:01.244000 audit: BPF prog-id=20 op=UNLOAD May 14 00:52:01.245000 audit: BPF prog-id=34 op=LOAD May 14 00:52:01.245000 audit: BPF prog-id=35 op=LOAD May 14 00:52:01.245000 audit: BPF prog-id=21 op=UNLOAD May 14 00:52:01.245000 audit: BPF prog-id=22 op=UNLOAD May 14 00:52:01.248877 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:52:01.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.253683 systemd[1]: Starting audit-rules.service... May 14 00:52:01.255771 systemd[1]: Starting clean-ca-certificates.service... May 14 00:52:01.258358 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:52:01.261000 audit: BPF prog-id=36 op=LOAD May 14 00:52:01.262535 systemd[1]: Starting systemd-resolved.service... May 14 00:52:01.263000 audit: BPF prog-id=37 op=LOAD May 14 00:52:01.264924 systemd[1]: Starting systemd-timesyncd.service... May 14 00:52:01.267196 systemd[1]: Starting systemd-update-utmp.service... May 14 00:52:01.270825 systemd[1]: Finished clean-ca-certificates.service. May 14 00:52:01.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.272065 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:52:01.274000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:52:01.273998 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:52:01.275633 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:52:01.277558 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:52:01.279425 systemd[1]: Starting modprobe@loop.service... May 14 00:52:01.280245 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:52:01.280390 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:01.280524 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:52:01.281388 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 14 00:52:01.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.282785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:52:01.282900 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:52:01.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.284216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:52:01.284342 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:52:01.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.285589 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:52:01.285698 systemd[1]: Finished modprobe@loop.service. May 14 00:52:01.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.288500 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:52:01.288650 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:52:01.290176 systemd[1]: Starting systemd-update-done.service... May 14 00:52:01.293387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:52:01.294626 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:52:01.296545 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:52:01.298958 systemd[1]: Starting modprobe@loop.service... May 14 00:52:01.299761 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:52:01.299893 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:01.299986 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:52:01.300940 systemd[1]: Finished systemd-update-utmp.service. 
May 14 00:52:01.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.302328 systemd[1]: Finished systemd-update-done.service. May 14 00:52:01.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.303618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:52:01.303808 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:52:01.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.304995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:52:01.305116 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:52:01.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.306506 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:52:01.306618 systemd[1]: Finished modprobe@loop.service. May 14 00:52:01.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.310713 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:52:01.312424 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:52:01.314444 systemd[1]: Starting modprobe@drm.service... May 14 00:52:01.316360 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:52:01.318146 systemd[1]: Starting modprobe@loop.service... May 14 00:52:01.319045 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:52:01.319174 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:01.320540 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:52:01.321524 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:52:01.322693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 14 00:52:01.322845 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:52:01.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.324214 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:52:01.324369 systemd[1]: Finished modprobe@drm.service. May 14 00:52:01.325590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:52:01.325694 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:52:01.327012 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:52:01.327123 systemd[1]: Finished modprobe@loop.service. May 14 00:52:01.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.328641 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:52:01.328738 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:52:01.329471 systemd[1]: Started systemd-timesyncd.service. May 14 00:52:01.330481 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:52:01.330537 systemd-timesyncd[1158]: Initial clock synchronization to Wed 2025-05-14 00:52:01.526505 UTC. May 14 00:52:01.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.331322 systemd[1]: Finished ensure-sysext.service. May 14 00:52:01.333074 systemd[1]: Reached target time-set.target. 
May 14 00:52:01.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:52:01.336000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:52:01.336000 audit[1183]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcebe2db0 a2=420 a3=0 items=0 ppid=1150 pid=1183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:52:01.336000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:52:01.336669 augenrules[1183]: No rules May 14 00:52:01.337596 systemd[1]: Finished audit-rules.service. May 14 00:52:01.340725 systemd-resolved[1154]: Positive Trust Anchors: May 14 00:52:01.342455 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:52:01.342485 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:52:01.357587 systemd-resolved[1154]: Defaulting to hostname 'linux'. May 14 00:52:01.358944 systemd[1]: Started systemd-resolved.service. May 14 00:52:01.359872 systemd[1]: Reached target network.target. May 14 00:52:01.360674 systemd[1]: Reached target nss-lookup.target. May 14 00:52:01.361486 systemd[1]: Reached target sysinit.target. May 14 00:52:01.362329 systemd[1]: Started motdgen.path. May 14 00:52:01.363036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:52:01.364317 systemd[1]: Started logrotate.timer. May 14 00:52:01.365143 systemd[1]: Started mdadm.timer. May 14 00:52:01.365843 systemd[1]: Started systemd-tmpfiles-clean.timer. May 14 00:52:01.366695 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:52:01.366727 systemd[1]: Reached target paths.target. May 14 00:52:01.367459 systemd[1]: Reached target timers.target. May 14 00:52:01.368550 systemd[1]: Listening on dbus.socket. May 14 00:52:01.370368 systemd[1]: Starting docker.socket... May 14 00:52:01.373602 systemd[1]: Listening on sshd.socket. May 14 00:52:01.374442 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:01.374891 systemd[1]: Listening on docker.socket. May 14 00:52:01.375753 systemd[1]: Reached target sockets.target. May 14 00:52:01.376535 systemd[1]: Reached target basic.target. May 14 00:52:01.377303 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:52:01.377340 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. 
May 14 00:52:01.378333 systemd[1]: Starting containerd.service... May 14 00:52:01.380027 systemd[1]: Starting dbus.service... May 14 00:52:01.381879 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:52:01.383972 systemd[1]: Starting extend-filesystems.service... May 14 00:52:01.384911 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:52:01.386314 systemd[1]: Starting motdgen.service... May 14 00:52:01.388387 systemd[1]: Starting prepare-helm.service... May 14 00:52:01.392665 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:52:01.394586 systemd[1]: Starting sshd-keygen.service... May 14 00:52:01.395982 jq[1192]: false May 14 00:52:01.397791 systemd[1]: Starting systemd-logind.service... May 14 00:52:01.401119 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:52:01.401189 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:52:01.401616 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:52:01.402291 systemd[1]: Starting update-engine.service... May 14 00:52:01.406707 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:52:01.409516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:52:01.412508 jq[1211]: true May 14 00:52:01.409701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 14 00:52:01.410706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:52:01.410878 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:52:01.412091 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:52:01.412255 systemd[1]: Finished motdgen.service. May 14 00:52:01.417537 extend-filesystems[1193]: Found loop1 May 14 00:52:01.418564 extend-filesystems[1193]: Found vda May 14 00:52:01.419625 extend-filesystems[1193]: Found vda1 May 14 00:52:01.419625 extend-filesystems[1193]: Found vda2 May 14 00:52:01.419625 extend-filesystems[1193]: Found vda3 May 14 00:52:01.419625 extend-filesystems[1193]: Found usr May 14 00:52:01.419625 extend-filesystems[1193]: Found vda4 May 14 00:52:01.419625 extend-filesystems[1193]: Found vda6 May 14 00:52:01.419625 extend-filesystems[1193]: Found vda7 May 14 00:52:01.419625 extend-filesystems[1193]: Found vda9 May 14 00:52:01.419625 extend-filesystems[1193]: Checking size of /dev/vda9 May 14 00:52:01.426703 tar[1213]: linux-arm64/LICENSE May 14 00:52:01.426703 tar[1213]: linux-arm64/helm May 14 00:52:01.427034 jq[1214]: true May 14 00:52:01.460467 dbus-daemon[1191]: [system] SELinux support is enabled May 14 00:52:01.460619 systemd[1]: Started dbus.service. May 14 00:52:01.465601 extend-filesystems[1193]: Resized partition /dev/vda9 May 14 00:52:01.463038 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:52:01.466598 extend-filesystems[1234]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:52:01.463060 systemd[1]: Reached target system-config.target. 
May 14 00:52:01.464039 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:52:01.464057 systemd[1]: Reached target user-config.target. May 14 00:52:01.471969 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:52:01.472958 systemd-logind[1203]: New seat seat0. May 14 00:52:01.477191 systemd[1]: Started systemd-logind.service. May 14 00:52:01.484290 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:52:01.507034 update_engine[1210]: I0514 00:52:01.506603 1210 main.cc:92] Flatcar Update Engine starting May 14 00:52:01.516556 update_engine[1210]: I0514 00:52:01.510046 1210 update_check_scheduler.cc:74] Next update check in 11m8s May 14 00:52:01.510051 systemd[1]: Started update-engine.service. May 14 00:52:01.512894 systemd[1]: Started locksmithd.service. May 14 00:52:01.521880 env[1215]: time="2025-05-14T00:52:01.521829720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:52:01.525252 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:52:01.542094 extend-filesystems[1234]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:52:01.542094 extend-filesystems[1234]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:52:01.542094 extend-filesystems[1234]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:52:01.547616 bash[1241]: Updated "/home/core/.ssh/authorized_keys" May 14 00:52:01.545045 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:52:01.547759 extend-filesystems[1193]: Resized filesystem in /dev/vda9 May 14 00:52:01.549278 env[1215]: time="2025-05-14T00:52:01.543713600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 00:52:01.549278 env[1215]: time="2025-05-14T00:52:01.543883280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:52:01.545217 systemd[1]: Finished extend-filesystems.service. May 14 00:52:01.547019 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 14 00:52:01.549909 env[1215]: time="2025-05-14T00:52:01.549851880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:52:01.549909 env[1215]: time="2025-05-14T00:52:01.549897320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:52:01.550244 env[1215]: time="2025-05-14T00:52:01.550117800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:52:01.550244 env[1215]: time="2025-05-14T00:52:01.550143120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 14 00:52:01.550244 env[1215]: time="2025-05-14T00:52:01.550156320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:52:01.550244 env[1215]: time="2025-05-14T00:52:01.550165960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 00:52:01.550369 env[1215]: time="2025-05-14T00:52:01.550258240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:52:01.550502 env[1215]: time="2025-05-14T00:52:01.550483120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:52:01.550636 env[1215]: time="2025-05-14T00:52:01.550604360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:52:01.550636 env[1215]: time="2025-05-14T00:52:01.550620080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 00:52:01.550697 env[1215]: time="2025-05-14T00:52:01.550673040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:52:01.550697 env[1215]: time="2025-05-14T00:52:01.550685800Z" level=info msg="metadata content store policy set" policy=shared May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562490960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562540400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562557880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562597800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562616680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562635120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562652120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.562981720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.563003000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.563021240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.563039960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.563057360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.563220040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 00:52:01.563409 env[1215]: time="2025-05-14T00:52:01.563338840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:52:01.567861 env[1215]: time="2025-05-14T00:52:01.567827880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:52:01.567950 env[1215]: time="2025-05-14T00:52:01.567874720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:52:01.567950 env[1215]: time="2025-05-14T00:52:01.567889720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:52:01.568067 env[1215]: time="2025-05-14T00:52:01.568051600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568095 env[1215]: time="2025-05-14T00:52:01.568073760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568095 env[1215]: time="2025-05-14T00:52:01.568088600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568137 env[1215]: time="2025-05-14T00:52:01.568101000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568137 env[1215]: time="2025-05-14T00:52:01.568113720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568137 env[1215]: time="2025-05-14T00:52:01.568126640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568190 env[1215]: time="2025-05-14T00:52:01.568138280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568190 env[1215]: time="2025-05-14T00:52:01.568150640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568190 env[1215]: time="2025-05-14T00:52:01.568164360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:52:01.568353 env[1215]: time="2025-05-14T00:52:01.568332840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568378 env[1215]: time="2025-05-14T00:52:01.568358200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568378 env[1215]: time="2025-05-14T00:52:01.568372400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568415 env[1215]: time="2025-05-14T00:52:01.568384760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:52:01.568415 env[1215]: time="2025-05-14T00:52:01.568401160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:52:01.568469 env[1215]: time="2025-05-14T00:52:01.568414880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:52:01.568469 env[1215]: time="2025-05-14T00:52:01.568440440Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:52:01.568508 env[1215]: time="2025-05-14T00:52:01.568484840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 00:52:01.568736 env[1215]: time="2025-05-14T00:52:01.568686200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:52:01.569509 env[1215]: time="2025-05-14T00:52:01.568748120Z" level=info msg="Connect containerd service" May 14 00:52:01.569509 env[1215]: time="2025-05-14T00:52:01.568785440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:52:01.569509 env[1215]: time="2025-05-14T00:52:01.569485880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:52:01.569873 env[1215]: time="2025-05-14T00:52:01.569846920Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 14 00:52:01.569912 env[1215]: time="2025-05-14T00:52:01.569901240Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:52:01.569956 env[1215]: time="2025-05-14T00:52:01.569945760Z" level=info msg="containerd successfully booted in 0.049259s" May 14 00:52:01.570049 systemd[1]: Started containerd.service. May 14 00:52:01.570566 env[1215]: time="2025-05-14T00:52:01.570282000Z" level=info msg="Start subscribing containerd event" May 14 00:52:01.570566 env[1215]: time="2025-05-14T00:52:01.570339160Z" level=info msg="Start recovering state" May 14 00:52:01.570566 env[1215]: time="2025-05-14T00:52:01.570398080Z" level=info msg="Start event monitor" May 14 00:52:01.570566 env[1215]: time="2025-05-14T00:52:01.570417920Z" level=info msg="Start snapshots syncer" May 14 00:52:01.570566 env[1215]: time="2025-05-14T00:52:01.570427240Z" level=info msg="Start cni network conf syncer for default" May 14 00:52:01.570566 env[1215]: time="2025-05-14T00:52:01.570444680Z" level=info msg="Start streaming server" May 14 00:52:01.584987 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:52:01.862331 tar[1213]: linux-arm64/README.md May 14 00:52:01.866694 systemd[1]: Finished prepare-helm.service. May 14 00:52:02.299469 systemd-networkd[1042]: eth0: Gained IPv6LL May 14 00:52:02.301138 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:52:02.302473 systemd[1]: Reached target network-online.target. May 14 00:52:02.304890 systemd[1]: Starting kubelet.service... May 14 00:52:02.807706 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:52:02.827110 systemd[1]: Finished sshd-keygen.service. May 14 00:52:02.829588 systemd[1]: Starting issuegen.service... May 14 00:52:02.834508 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:52:02.834664 systemd[1]: Finished issuegen.service. May 14 00:52:02.836935 systemd[1]: Starting systemd-user-sessions.service... May 14 00:52:02.844868 systemd[1]: Finished systemd-user-sessions.service. May 14 00:52:02.847324 systemd[1]: Started getty@tty1.service. May 14 00:52:02.849520 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:52:02.850681 systemd[1]: Reached target getty.target. May 14 00:52:02.901918 systemd[1]: Started kubelet.service. May 14 00:52:02.903338 systemd[1]: Reached target multi-user.target. May 14 00:52:02.905509 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:52:02.912946 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 14 00:52:02.913112 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:52:02.914292 systemd[1]: Startup finished in 587ms (kernel) + 4.529s (initrd) + 4.809s (userspace) = 9.926s. May 14 00:52:03.338642 kubelet[1271]: E0514 00:52:03.338576 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:52:03.340505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:52:03.340643 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:52:06.048497 systemd[1]: Created slice system-sshd.slice. 
May 14 00:52:06.049680 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:33700.service. May 14 00:52:06.096608 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 33700 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:06.099079 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:06.111731 systemd-logind[1203]: New session 1 of user core. May 14 00:52:06.112673 systemd[1]: Created slice user-500.slice. May 14 00:52:06.113834 systemd[1]: Starting user-runtime-dir@500.service... May 14 00:52:06.122479 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:52:06.123841 systemd[1]: Starting user@500.service... May 14 00:52:06.126661 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:06.186209 systemd[1283]: Queued start job for default target default.target. May 14 00:52:06.186720 systemd[1283]: Reached target paths.target. May 14 00:52:06.186752 systemd[1283]: Reached target sockets.target. May 14 00:52:06.186764 systemd[1283]: Reached target timers.target. May 14 00:52:06.186774 systemd[1283]: Reached target basic.target. May 14 00:52:06.186818 systemd[1283]: Reached target default.target. May 14 00:52:06.186843 systemd[1283]: Startup finished in 54ms. May 14 00:52:06.186918 systemd[1]: Started user@500.service. May 14 00:52:06.187899 systemd[1]: Started session-1.scope. May 14 00:52:06.239241 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:33704.service. May 14 00:52:06.287620 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 33704 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:06.289319 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:06.294126 systemd[1]: Started session-2.scope. May 14 00:52:06.294450 systemd-logind[1203]: New session 2 of user core. May 14 00:52:06.348870 sshd[1292]: pam_unix(sshd:session): session closed for user core May 14 00:52:06.351499 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:33704.service: Deactivated successfully. May 14 00:52:06.352096 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:52:06.352615 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. May 14 00:52:06.353678 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:33712.service. May 14 00:52:06.354274 systemd-logind[1203]: Removed session 2. May 14 00:52:06.393876 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 33712 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:06.395010 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:06.398170 systemd-logind[1203]: New session 3 of user core. May 14 00:52:06.398978 systemd[1]: Started session-3.scope. May 14 00:52:06.449124 sshd[1298]: pam_unix(sshd:session): session closed for user core May 14 00:52:06.453024 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:33712.service: Deactivated successfully. May 14 00:52:06.453663 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:52:06.454159 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. May 14 00:52:06.455214 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:33726.service. May 14 00:52:06.455906 systemd-logind[1203]: Removed session 3. 
May 14 00:52:06.495686 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 33726 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:06.496953 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:06.500563 systemd-logind[1203]: New session 4 of user core. May 14 00:52:06.501421 systemd[1]: Started session-4.scope. May 14 00:52:06.556022 sshd[1304]: pam_unix(sshd:session): session closed for user core May 14 00:52:06.560032 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:33726.service: Deactivated successfully. May 14 00:52:06.560642 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:52:06.561151 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. May 14 00:52:06.562194 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:33740.service. May 14 00:52:06.562883 systemd-logind[1203]: Removed session 4. May 14 00:52:06.605590 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 33740 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:06.607634 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:06.612080 systemd[1]: Started session-5.scope. May 14 00:52:06.612381 systemd-logind[1203]: New session 5 of user core. May 14 00:52:06.672965 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:52:06.673278 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:52:06.730190 systemd[1]: Starting docker.service... May 14 00:52:06.831882 env[1325]: time="2025-05-14T00:52:06.831812968Z" level=info msg="Starting up" May 14 00:52:06.833594 env[1325]: time="2025-05-14T00:52:06.833562351Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:52:06.833594 env[1325]: time="2025-05-14T00:52:06.833585519Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:52:06.833681 env[1325]: time="2025-05-14T00:52:06.833606862Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:52:06.833681 env[1325]: time="2025-05-14T00:52:06.833618060Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:52:06.835739 env[1325]: time="2025-05-14T00:52:06.835704864Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:52:06.835739 env[1325]: time="2025-05-14T00:52:06.835729249Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:52:06.835828 env[1325]: time="2025-05-14T00:52:06.835744992Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:52:06.835828 env[1325]: time="2025-05-14T00:52:06.835757408Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:52:06.840164 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3317714981-merged.mount: Deactivated successfully. May 14 00:52:07.007561 env[1325]: time="2025-05-14T00:52:07.007090178Z" level=info msg="Loading containers: start." May 14 00:52:07.127268 kernel: Initializing XFRM netlink socket May 14 00:52:07.151157 env[1325]: time="2025-05-14T00:52:07.151120599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 14 00:52:07.202402 systemd-networkd[1042]: docker0: Link UP May 14 00:52:07.222510 env[1325]: time="2025-05-14T00:52:07.222480714Z" level=info msg="Loading containers: done." May 14 00:52:07.244754 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3956304959-merged.mount: Deactivated successfully. May 14 00:52:07.249769 env[1325]: time="2025-05-14T00:52:07.249722687Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:52:07.249921 env[1325]: time="2025-05-14T00:52:07.249890572Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 14 00:52:07.250009 env[1325]: time="2025-05-14T00:52:07.249986928Z" level=info msg="Daemon has completed initialization" May 14 00:52:07.265292 systemd[1]: Started docker.service. May 14 00:52:07.272330 env[1325]: time="2025-05-14T00:52:07.272284825Z" level=info msg="API listen on /run/docker.sock" May 14 00:52:08.005791 env[1215]: time="2025-05-14T00:52:08.005720562Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 00:52:08.729286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373007727.mount: Deactivated successfully. May 14 00:52:10.286065 env[1215]: time="2025-05-14T00:52:10.286019217Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:10.287649 env[1215]: time="2025-05-14T00:52:10.287613575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:10.289507 env[1215]: time="2025-05-14T00:52:10.289485169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:10.291313 env[1215]: time="2025-05-14T00:52:10.291284963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:10.292104 env[1215]: time="2025-05-14T00:52:10.292073672Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 14 00:52:10.292771 env[1215]: time="2025-05-14T00:52:10.292748590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 00:52:11.945084 env[1215]: time="2025-05-14T00:52:11.945000229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:11.946688 env[1215]: time="2025-05-14T00:52:11.946652277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:11.948394 env[1215]: time="2025-05-14T00:52:11.948354653Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:11.950385 env[1215]: time="2025-05-14T00:52:11.950343928Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:11.951076 env[1215]: time="2025-05-14T00:52:11.951044333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 14 00:52:11.951523 env[1215]: time="2025-05-14T00:52:11.951497448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 00:52:13.269081 env[1215]: time="2025-05-14T00:52:13.269030631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:13.270997 env[1215]: time="2025-05-14T00:52:13.270962190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:13.273219 env[1215]: time="2025-05-14T00:52:13.273195643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:13.274725 env[1215]: time="2025-05-14T00:52:13.274689748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:13.275525 env[1215]: time="2025-05-14T00:52:13.275498808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 14 00:52:13.276053 env[1215]: time="2025-05-14T00:52:13.276033207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 00:52:13.591486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:52:13.591659 systemd[1]: Stopped kubelet.service. May 14 00:52:13.593191 systemd[1]: Starting kubelet.service... May 14 00:52:13.680327 systemd[1]: Started kubelet.service. May 14 00:52:13.717854 kubelet[1460]: E0514 00:52:13.717801 1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:52:13.720535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:52:13.720660 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:52:14.451545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547550465.mount: Deactivated successfully. 
May 14 00:52:15.044626 env[1215]: time="2025-05-14T00:52:15.044582710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:15.045981 env[1215]: time="2025-05-14T00:52:15.045955094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:15.047475 env[1215]: time="2025-05-14T00:52:15.047438114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:15.048847 env[1215]: time="2025-05-14T00:52:15.048823876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:15.049247 env[1215]: time="2025-05-14T00:52:15.049209213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 14 00:52:15.049806 env[1215]: time="2025-05-14T00:52:15.049779426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 00:52:15.585980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78872149.mount: Deactivated successfully. May 14 00:52:16.755730 env[1215]: time="2025-05-14T00:52:16.755672846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:16.757306 env[1215]: time="2025-05-14T00:52:16.757271106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:16.759468 env[1215]: time="2025-05-14T00:52:16.759436220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:16.761539 env[1215]: time="2025-05-14T00:52:16.761511798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:16.762365 env[1215]: time="2025-05-14T00:52:16.762323294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 14 00:52:16.763584 env[1215]: time="2025-05-14T00:52:16.763558466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 00:52:17.229494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605942177.mount: Deactivated successfully. 
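Each PullImage above returns an image reference by content digest (for example sha256:62c496... for kube-proxy). A minimal sketch of recomputing such a digest and comparing it to the expected reference; the containerd content-store path below is an assumption about the default layout, not something taken from this log:

```python
# Minimal sketch: recompute a registry-style sha256 digest and compare it with an
# expected "sha256:..." reference like the ones returned by PullImage above.
# Assumption: containerd keeps blobs under its default content-store path.
import hashlib
from pathlib import Path

def digest_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

expected = "sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7"
store = Path("/var/lib/containerd/io.containerd.content.v1.content/blobs/sha256")
blob = store / expected.split(":", 1)[1]
if blob.is_file():
    print("match" if digest_of(blob) == expected else "MISMATCH")
else:
    print("blob not found in content store (path is an assumption)")
```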
May 14 00:52:17.237548 env[1215]: time="2025-05-14T00:52:17.237491043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:17.239116 env[1215]: time="2025-05-14T00:52:17.239070169Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:17.240823 env[1215]: time="2025-05-14T00:52:17.240788956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:17.242329 env[1215]: time="2025-05-14T00:52:17.242295162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:17.242846 env[1215]: time="2025-05-14T00:52:17.242812026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 00:52:17.243506 env[1215]: time="2025-05-14T00:52:17.243481835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 00:52:17.809772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246489851.mount: Deactivated successfully. May 14 00:52:20.354306 env[1215]: time="2025-05-14T00:52:20.354228154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:20.355877 env[1215]: time="2025-05-14T00:52:20.355840799Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:20.358691 env[1215]: time="2025-05-14T00:52:20.358659753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:20.360638 env[1215]: time="2025-05-14T00:52:20.360607900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:20.361549 env[1215]: time="2025-05-14T00:52:20.361518354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 14 00:52:23.971556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:52:23.971735 systemd[1]: Stopped kubelet.service. May 14 00:52:23.973273 systemd[1]: Starting kubelet.service... May 14 00:52:24.063596 systemd[1]: Started kubelet.service. 
May 14 00:52:24.094832 kubelet[1493]: E0514 00:52:24.094788 1493 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:52:24.096984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:52:24.097108 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:52:25.239570 systemd[1]: Stopped kubelet.service. May 14 00:52:25.241546 systemd[1]: Starting kubelet.service... May 14 00:52:25.262078 systemd[1]: Reloading. May 14 00:52:25.314499 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-05-14T00:52:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:52:25.314535 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-05-14T00:52:25Z" level=info msg="torcx already run" May 14 00:52:25.460442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:52:25.460461 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:52:25.475870 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:52:25.550155 systemd[1]: Started kubelet.service. May 14 00:52:25.551680 systemd[1]: Stopping kubelet.service... May 14 00:52:25.552049 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:52:25.552204 systemd[1]: Stopped kubelet.service. May 14 00:52:25.553556 systemd[1]: Starting kubelet.service... May 14 00:52:25.642446 systemd[1]: Started kubelet.service. May 14 00:52:25.676194 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:52:25.676194 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 00:52:25.676194 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
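The reload above also surfaces systemd's warnings that locksmithd.service still uses CPUShares= and MemoryLimit=, which are slated for removal in favour of CPUWeight= and MemoryMax=. A minimal sketch of scanning a unit file for those deprecated directives; the unit path is the one named in the warning:

```python
# Minimal sketch: flag the deprecated unit directives systemd warns about above.
# The replacements (CPUWeight=, MemoryMax=) are the ones systemd itself suggests.
from pathlib import Path

REPLACEMENTS = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}
unit = Path("/usr/lib/systemd/system/locksmithd.service")

for lineno, line in enumerate(unit.read_text().splitlines(), start=1):
    for old, new in REPLACEMENTS.items():
        if line.lstrip().startswith(old):
            print(f"{unit}:{lineno}: {old} is deprecated, use {new}")
```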
May 14 00:52:25.676546 kubelet[1573]: I0514 00:52:25.676259 1573 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:52:26.413912 kubelet[1573]: I0514 00:52:26.413869 1573 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 00:52:26.413912 kubelet[1573]: I0514 00:52:26.413899 1573 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:52:26.414176 kubelet[1573]: I0514 00:52:26.414151 1573 server.go:954] "Client rotation is on, will bootstrap in background" May 14 00:52:26.474203 kubelet[1573]: I0514 00:52:26.474169 1573 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:52:26.474624 kubelet[1573]: E0514 00:52:26.474597 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:26.483326 kubelet[1573]: E0514 00:52:26.483283 1573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 00:52:26.483326 kubelet[1573]: I0514 00:52:26.483315 1573 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:52:26.486047 kubelet[1573]: I0514 00:52:26.486027 1573 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:52:26.486770 kubelet[1573]: I0514 00:52:26.486721 1573 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:52:26.486938 kubelet[1573]: I0514 00:52:26.486763 1573 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:52:26.487019 kubelet[1573]: I0514 00:52:26.487010 1573 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:52:26.487019 kubelet[1573]: I0514 00:52:26.487020 1573 container_manager_linux.go:304] "Creating device plugin manager" May 14 00:52:26.487215 kubelet[1573]: I0514 00:52:26.487202 1573 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:26.491644 kubelet[1573]: I0514 00:52:26.491613 1573 kubelet.go:446] "Attempting to sync node with API server" May 14 00:52:26.491644 kubelet[1573]: I0514 00:52:26.491637 1573 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:52:26.491763 kubelet[1573]: I0514 00:52:26.491658 1573 kubelet.go:352] "Adding apiserver pod source" May 14 00:52:26.491763 kubelet[1573]: I0514 00:52:26.491668 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:52:26.511510 kubelet[1573]: I0514 00:52:26.511471 1573 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:52:26.512096 kubelet[1573]: I0514 00:52:26.512066 1573 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:52:26.512197 kubelet[1573]: W0514 00:52:26.512179 1573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
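The NodeConfig dump above embeds the kubelet's HardEvictionThresholds (memory.available < 100Mi, nodefs.available < 10%, and so on). A minimal sketch of how such LessThan thresholds are evaluated; the observed numbers below are invented for illustration, while the real ones come from the kubelet's stats provider:

```python
# Minimal sketch: evaluate the HardEvictionThresholds from the NodeConfig above.
# The sample "observed" stats are made up; only the threshold logic is real.
thresholds = [
    {"Signal": "memory.available", "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available", "Value": {"Quantity": None, "Percentage": 0.1}},
]

def quantity_to_bytes(q: str) -> int:
    units = {"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30}
    for suffix, mult in units.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * mult
    return int(q)

observed = {  # hypothetical sample stats (bytes)
    "memory.available": {"available": 80 * (1 << 20), "capacity": 2 * (1 << 30)},
    "nodefs.available": {"available": 3 * (1 << 30), "capacity": 40 * (1 << 30)},
}

for t in thresholds:
    sig, val = t["Signal"], t["Value"]
    stat = observed[sig]
    # A percentage threshold is taken relative to capacity; a quantity is absolute.
    limit = (quantity_to_bytes(val["Quantity"]) if val["Quantity"]
             else val["Percentage"] * stat["capacity"])
    crossed = stat["available"] < limit  # Operator "LessThan"
    print(f"{sig}: available={stat['available']} threshold={int(limit)} "
          f"eviction={'yes' if crossed else 'no'}")
```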
May 14 00:52:26.512314 kubelet[1573]: W0514 00:52:26.512260 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:26.512366 kubelet[1573]: E0514 00:52:26.512325 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:26.512366 kubelet[1573]: W0514 00:52:26.512326 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:26.512414 kubelet[1573]: E0514 00:52:26.512366 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:26.512998 kubelet[1573]: I0514 00:52:26.512978 1573 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 00:52:26.513069 kubelet[1573]: I0514 00:52:26.513013 1573 server.go:1287] "Started kubelet" May 14 00:52:26.525718 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 14 00:52:26.525883 kubelet[1573]: I0514 00:52:26.525839 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:52:26.527069 kubelet[1573]: I0514 00:52:26.527039 1573 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:52:26.528001 kubelet[1573]: I0514 00:52:26.527941 1573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:52:26.528407 kubelet[1573]: I0514 00:52:26.528384 1573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:52:26.528609 kubelet[1573]: I0514 00:52:26.528547 1573 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:52:26.529097 kubelet[1573]: I0514 00:52:26.529049 1573 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 00:52:26.529387 kubelet[1573]: E0514 00:52:26.529355 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:52:26.529387 kubelet[1573]: I0514 00:52:26.529329 1573 server.go:490] "Adding debug handlers to kubelet server" May 14 00:52:26.529509 kubelet[1573]: I0514 00:52:26.529161 1573 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:52:26.529666 kubelet[1573]: I0514 00:52:26.529655 1573 reconciler.go:26] "Reconciler: start to sync state" May 14 00:52:26.530030 kubelet[1573]: I0514 00:52:26.530008 1573 factory.go:221] Registration of the systemd container factory successfully May 14 00:52:26.530155 kubelet[1573]: E0514 00:52:26.530125 1573 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:52:26.530284 kubelet[1573]: I0514 00:52:26.530267 1573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:52:26.530988 kubelet[1573]: W0514 00:52:26.530948 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:26.531045 kubelet[1573]: E0514 00:52:26.531004 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:26.531045 kubelet[1573]: E0514 00:52:26.530404 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms" May 14 00:52:26.531045 kubelet[1573]: E0514 00:52:26.529849 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3e8ad92c0883 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:52:26.512992387 +0000 UTC m=+0.867364733,LastTimestamp:2025-05-14 00:52:26.512992387 +0000 UTC m=+0.867364733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:52:26.532264 kubelet[1573]: I0514 00:52:26.532184 1573 factory.go:221] Registration of the containerd container factory successfully May 14 00:52:26.543862 kubelet[1573]: I0514 00:52:26.543842 1573 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 00:52:26.543862 kubelet[1573]: I0514 00:52:26.543857 1573 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 00:52:26.543960 kubelet[1573]: I0514 00:52:26.543874 1573 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:26.548243 kubelet[1573]: I0514 00:52:26.548193 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:52:26.549173 kubelet[1573]: I0514 00:52:26.549145 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:52:26.549173 kubelet[1573]: I0514 00:52:26.549172 1573 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 00:52:26.549298 kubelet[1573]: I0514 00:52:26.549192 1573 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
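Every API call above fails with connection refused against https://10.0.0.131:6443 because the kube-apiserver static pod has not started yet, and the lease controller retries with a growing interval (200ms here, later 400ms, 800ms and 1.6s). A minimal sketch of the same kind of probe with a doubling, capped backoff; the address is taken from the log, the timeouts are illustrative:

```python
# Minimal sketch: probe the API server endpoint the kubelet is retrying above.
# The host:port comes from the logged URL; delays mirror the logged intervals.
import socket
import time

def wait_for_apiserver(host="10.0.0.131", port=6443, attempts=10) -> bool:
    delay = 0.2  # the lease controller above starts at 200ms
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:  # "connect: connection refused" until the static pod is up
            time.sleep(delay)
            delay = min(delay * 2, 1.6)  # doubling, capped like 200ms -> ... -> 1.6s
    return False

if __name__ == "__main__":
    print("apiserver reachable:", wait_for_apiserver())
```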
May 14 00:52:26.549298 kubelet[1573]: I0514 00:52:26.549198 1573 kubelet.go:2388] "Starting kubelet main sync loop" May 14 00:52:26.549298 kubelet[1573]: E0514 00:52:26.549271 1573 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:52:26.549809 kubelet[1573]: W0514 00:52:26.549771 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:26.549890 kubelet[1573]: E0514 00:52:26.549819 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:26.629632 kubelet[1573]: E0514 00:52:26.629582 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:52:26.649881 kubelet[1573]: E0514 00:52:26.649841 1573 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:52:26.730928 kubelet[1573]: E0514 00:52:26.729906 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:52:26.732514 kubelet[1573]: E0514 00:52:26.732464 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" May 14 00:52:26.737880 kubelet[1573]: I0514 00:52:26.737850 1573 policy_none.go:49] "None policy: Start" May 14 00:52:26.737880 kubelet[1573]: I0514 00:52:26.737886 1573 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 00:52:26.737964 kubelet[1573]: I0514 00:52:26.737899 1573 state_mem.go:35] "Initializing new in-memory state store" May 14 00:52:26.742616 systemd[1]: Created slice kubepods.slice. May 14 00:52:26.746468 systemd[1]: Created slice kubepods-burstable.slice. May 14 00:52:26.748737 systemd[1]: Created slice kubepods-besteffort.slice. May 14 00:52:26.749657 kubelet[1573]: W0514 00:52:26.749632 1573 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device May 14 00:52:26.762932 kubelet[1573]: I0514 00:52:26.762867 1573 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:52:26.763019 kubelet[1573]: I0514 00:52:26.763003 1573 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:52:26.763049 kubelet[1573]: I0514 00:52:26.763013 1573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:52:26.763398 kubelet[1573]: I0514 00:52:26.763373 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:52:26.764205 kubelet[1573]: E0514 00:52:26.764186 1573 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 00:52:26.764417 kubelet[1573]: E0514 00:52:26.764359 1573 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:52:26.857357 systemd[1]: Created slice kubepods-burstable-pod2912a870ca8bd8708d5701cfa27ed384.slice. May 14 00:52:26.864468 kubelet[1573]: I0514 00:52:26.864430 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:52:26.865015 kubelet[1573]: E0514 00:52:26.864990 1573 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" May 14 00:52:26.868237 kubelet[1573]: E0514 00:52:26.868214 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:26.869980 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 14 00:52:26.887143 kubelet[1573]: E0514 00:52:26.887122 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:26.888876 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 14 00:52:26.890168 kubelet[1573]: E0514 00:52:26.890151 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:26.931493 kubelet[1573]: I0514 00:52:26.931461 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:26.931647 kubelet[1573]: I0514 00:52:26.931618 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:26.931760 kubelet[1573]: I0514 00:52:26.931745 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:26.931858 kubelet[1573]: I0514 00:52:26.931842 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:26.931935 kubelet[1573]: I0514 00:52:26.931922 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") 
" pod="kube-system/kube-apiserver-localhost" May 14 00:52:26.932009 kubelet[1573]: I0514 00:52:26.931997 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:26.932085 kubelet[1573]: I0514 00:52:26.932073 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:26.932167 kubelet[1573]: I0514 00:52:26.932153 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:26.932286 kubelet[1573]: I0514 00:52:26.932271 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 00:52:26.936943 kubelet[1573]: E0514 00:52:26.936825 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3e8ad92c0883 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:52:26.512992387 +0000 UTC m=+0.867364733,LastTimestamp:2025-05-14 00:52:26.512992387 +0000 UTC m=+0.867364733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:52:27.067091 kubelet[1573]: I0514 00:52:27.066990 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:52:27.067359 kubelet[1573]: E0514 00:52:27.067315 1573 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" May 14 00:52:27.133178 kubelet[1573]: E0514 00:52:27.133141 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" May 14 00:52:27.169478 kubelet[1573]: E0514 00:52:27.169428 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.170175 env[1215]: 
time="2025-05-14T00:52:27.170129287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2912a870ca8bd8708d5701cfa27ed384,Namespace:kube-system,Attempt:0,}" May 14 00:52:27.187875 kubelet[1573]: E0514 00:52:27.187844 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.188499 env[1215]: time="2025-05-14T00:52:27.188394689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 14 00:52:27.191010 kubelet[1573]: E0514 00:52:27.190916 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.191340 env[1215]: time="2025-05-14T00:52:27.191293850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 14 00:52:27.401337 kubelet[1573]: W0514 00:52:27.401198 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:27.401337 kubelet[1573]: E0514 00:52:27.401282 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:27.468998 kubelet[1573]: I0514 00:52:27.468971 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:52:27.469328 kubelet[1573]: E0514 00:52:27.469307 1573 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" May 14 00:52:27.485983 kubelet[1573]: W0514 00:52:27.485948 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:27.485983 kubelet[1573]: E0514 00:52:27.485986 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:27.582246 kubelet[1573]: W0514 00:52:27.582122 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:27.582246 kubelet[1573]: E0514 00:52:27.582185 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:27.654776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662470012.mount: Deactivated successfully. May 14 00:52:27.659020 env[1215]: time="2025-05-14T00:52:27.658971490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.661820 env[1215]: time="2025-05-14T00:52:27.661777690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.662575 env[1215]: time="2025-05-14T00:52:27.662550322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.663609 env[1215]: time="2025-05-14T00:52:27.663571410Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.665291 env[1215]: time="2025-05-14T00:52:27.665259077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.666746 env[1215]: time="2025-05-14T00:52:27.666713502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.668302 env[1215]: time="2025-05-14T00:52:27.668271417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.671386 env[1215]: time="2025-05-14T00:52:27.671360863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.673537 env[1215]: time="2025-05-14T00:52:27.673502885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.674267 env[1215]: time="2025-05-14T00:52:27.674212903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.675005 env[1215]: time="2025-05-14T00:52:27.674979810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.676563 env[1215]: time="2025-05-14T00:52:27.676535923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:27.718494 env[1215]: time="2025-05-14T00:52:27.718400684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:27.718494 env[1215]: time="2025-05-14T00:52:27.718443040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:27.718494 env[1215]: time="2025-05-14T00:52:27.718453770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:27.718712 env[1215]: time="2025-05-14T00:52:27.718663432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2394ababb801adf087827408313b60cc64684890d4aede28d3c42be1b6e1917 pid=1624 runtime=io.containerd.runc.v2 May 14 00:52:27.719410 env[1215]: time="2025-05-14T00:52:27.719330052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:27.719410 env[1215]: time="2025-05-14T00:52:27.719362560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:27.719410 env[1215]: time="2025-05-14T00:52:27.719373089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:27.720157 env[1215]: time="2025-05-14T00:52:27.720094877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:27.720157 env[1215]: time="2025-05-14T00:52:27.720126544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:27.720157 env[1215]: time="2025-05-14T00:52:27.720136753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:27.720410 env[1215]: time="2025-05-14T00:52:27.720371918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8018c5816e73c5b761fe3c807f5fa9bf11a7e50cb3fc41613b2e91f1e42b9bde pid=1635 runtime=io.containerd.runc.v2 May 14 00:52:27.721226 env[1215]: time="2025-05-14T00:52:27.720469642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7dd282db100bae7b8c994625f029d72a11df5947a3b46a3351eba60d99aa9ee4 pid=1625 runtime=io.containerd.runc.v2 May 14 00:52:27.733620 systemd[1]: Started cri-containerd-e2394ababb801adf087827408313b60cc64684890d4aede28d3c42be1b6e1917.scope. May 14 00:52:27.738492 systemd[1]: Started cri-containerd-8018c5816e73c5b761fe3c807f5fa9bf11a7e50cb3fc41613b2e91f1e42b9bde.scope. May 14 00:52:27.747399 systemd[1]: Started cri-containerd-7dd282db100bae7b8c994625f029d72a11df5947a3b46a3351eba60d99aa9ee4.scope. 
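The recurring "Nameserver limits exceeded" warnings above and below come from the kubelet applying only the first three nameserver entries of the host resolv.conf (1.1.1.1 1.0.0.1 8.8.8.8 in this log) and dropping the rest. A minimal sketch of that truncation, reading the usual /etc/resolv.conf path:

```python
# Minimal sketch: reproduce the kubelet's three-nameserver limit that triggers
# the "Nameserver limits exceeded" warnings in this log.
from pathlib import Path

MAX_NAMESERVERS = 3  # the limit the kubelet (like glibc) enforces
fields = [
    line.split()
    for line in Path("/etc/resolv.conf").read_text().splitlines()
    if line.startswith("nameserver")
]
servers = [f[1] for f in fields if len(f) >= 2]
applied, omitted = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
print("applied:", " ".join(applied))
if omitted:
    print("omitted:", " ".join(omitted))  # these are what the warning refers to
```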
May 14 00:52:27.797324 env[1215]: time="2025-05-14T00:52:27.797272942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2912a870ca8bd8708d5701cfa27ed384,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2394ababb801adf087827408313b60cc64684890d4aede28d3c42be1b6e1917\"" May 14 00:52:27.798155 kubelet[1573]: E0514 00:52:27.798123 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.805622 env[1215]: time="2025-05-14T00:52:27.805579124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"8018c5816e73c5b761fe3c807f5fa9bf11a7e50cb3fc41613b2e91f1e42b9bde\"" May 14 00:52:27.806053 env[1215]: time="2025-05-14T00:52:27.806017505Z" level=info msg="CreateContainer within sandbox \"e2394ababb801adf087827408313b60cc64684890d4aede28d3c42be1b6e1917\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:52:27.806580 kubelet[1573]: E0514 00:52:27.806551 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.808073 env[1215]: time="2025-05-14T00:52:27.808034579Z" level=info msg="CreateContainer within sandbox \"8018c5816e73c5b761fe3c807f5fa9bf11a7e50cb3fc41613b2e91f1e42b9bde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:52:27.819157 env[1215]: time="2025-05-14T00:52:27.819119137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dd282db100bae7b8c994625f029d72a11df5947a3b46a3351eba60d99aa9ee4\"" May 14 00:52:27.820133 kubelet[1573]: E0514 00:52:27.819909 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.821953 env[1215]: time="2025-05-14T00:52:27.821901917Z" level=info msg="CreateContainer within sandbox \"7dd282db100bae7b8c994625f029d72a11df5947a3b46a3351eba60d99aa9ee4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:52:27.830169 env[1215]: time="2025-05-14T00:52:27.830119742Z" level=info msg="CreateContainer within sandbox \"8018c5816e73c5b761fe3c807f5fa9bf11a7e50cb3fc41613b2e91f1e42b9bde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"957719819ead333ad3b500d9f0538661075d02fee55454733d9505f6da7933ae\"" May 14 00:52:27.830760 env[1215]: time="2025-05-14T00:52:27.830732475Z" level=info msg="StartContainer for \"957719819ead333ad3b500d9f0538661075d02fee55454733d9505f6da7933ae\"" May 14 00:52:27.831771 env[1215]: time="2025-05-14T00:52:27.831727820Z" level=info msg="CreateContainer within sandbox \"e2394ababb801adf087827408313b60cc64684890d4aede28d3c42be1b6e1917\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec8316de5d2f555f9ad7d4b9e1c504a06f9f980827a3dc7f5870af38f6da9810\"" May 14 00:52:27.832134 env[1215]: time="2025-05-14T00:52:27.832087173Z" level=info msg="StartContainer for \"ec8316de5d2f555f9ad7d4b9e1c504a06f9f980827a3dc7f5870af38f6da9810\"" May 14 00:52:27.840049 env[1215]: time="2025-05-14T00:52:27.839999653Z" level=info 
msg="CreateContainer within sandbox \"7dd282db100bae7b8c994625f029d72a11df5947a3b46a3351eba60d99aa9ee4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed2f326a0e4031054433718dd5ffb7a57f349a5d48f1fce6682f3ec19c0f0a5c\"" May 14 00:52:27.840497 env[1215]: time="2025-05-14T00:52:27.840471183Z" level=info msg="StartContainer for \"ed2f326a0e4031054433718dd5ffb7a57f349a5d48f1fce6682f3ec19c0f0a5c\"" May 14 00:52:27.847539 systemd[1]: Started cri-containerd-957719819ead333ad3b500d9f0538661075d02fee55454733d9505f6da7933ae.scope. May 14 00:52:27.849518 systemd[1]: Started cri-containerd-ec8316de5d2f555f9ad7d4b9e1c504a06f9f980827a3dc7f5870af38f6da9810.scope. May 14 00:52:27.863452 systemd[1]: Started cri-containerd-ed2f326a0e4031054433718dd5ffb7a57f349a5d48f1fce6682f3ec19c0f0a5c.scope. May 14 00:52:27.910542 env[1215]: time="2025-05-14T00:52:27.910444624Z" level=info msg="StartContainer for \"ec8316de5d2f555f9ad7d4b9e1c504a06f9f980827a3dc7f5870af38f6da9810\" returns successfully" May 14 00:52:27.917140 kubelet[1573]: W0514 00:52:27.917026 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 14 00:52:27.917140 kubelet[1573]: E0514 00:52:27.917099 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 14 00:52:27.922714 env[1215]: time="2025-05-14T00:52:27.922673417Z" level=info msg="StartContainer for \"ed2f326a0e4031054433718dd5ffb7a57f349a5d48f1fce6682f3ec19c0f0a5c\" returns successfully" May 14 00:52:27.938506 kubelet[1573]: E0514 00:52:27.934476 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s" May 14 00:52:27.947419 env[1215]: time="2025-05-14T00:52:27.944425290Z" level=info msg="StartContainer for \"957719819ead333ad3b500d9f0538661075d02fee55454733d9505f6da7933ae\" returns successfully" May 14 00:52:28.270829 kubelet[1573]: I0514 00:52:28.270731 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:52:28.555470 kubelet[1573]: E0514 00:52:28.555054 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:28.555470 kubelet[1573]: E0514 00:52:28.555184 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.557071 kubelet[1573]: E0514 00:52:28.557051 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:28.557421 kubelet[1573]: E0514 00:52:28.557401 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.558920 kubelet[1573]: E0514 
00:52:28.558900 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:28.559013 kubelet[1573]: E0514 00:52:28.558999 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:29.560609 kubelet[1573]: E0514 00:52:29.560577 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:29.561291 kubelet[1573]: E0514 00:52:29.561269 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:29.561471 kubelet[1573]: E0514 00:52:29.561130 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:52:29.561643 kubelet[1573]: E0514 00:52:29.561627 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:30.066462 kubelet[1573]: E0514 00:52:30.066421 1573 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:52:30.147156 kubelet[1573]: I0514 00:52:30.147120 1573 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 00:52:30.147405 kubelet[1573]: E0514 00:52:30.147390 1573 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 00:52:30.150051 kubelet[1573]: E0514 00:52:30.150023 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:52:30.231326 kubelet[1573]: I0514 00:52:30.231294 1573 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 00:52:30.239548 kubelet[1573]: E0514 00:52:30.239517 1573 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 00:52:30.239691 kubelet[1573]: I0514 00:52:30.239680 1573 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 00:52:30.241773 kubelet[1573]: E0514 00:52:30.241738 1573 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 14 00:52:30.241884 kubelet[1573]: I0514 00:52:30.241872 1573 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 00:52:30.243359 kubelet[1573]: E0514 00:52:30.243339 1573 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 14 00:52:30.512275 kubelet[1573]: I0514 00:52:30.512158 1573 apiserver.go:52] "Watching apiserver" May 14 00:52:30.529735 kubelet[1573]: I0514 00:52:30.529698 1573 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:52:32.095017 systemd[1]: Reloading. May 14 00:52:32.142604 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2025-05-14T00:52:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:52:32.142633 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2025-05-14T00:52:32Z" level=info msg="torcx already run" May 14 00:52:32.200534 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:52:32.200557 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:52:32.215862 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:52:32.295136 systemd[1]: Stopping kubelet.service... May 14 00:52:32.316729 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:52:32.316938 systemd[1]: Stopped kubelet.service. May 14 00:52:32.316989 systemd[1]: kubelet.service: Consumed 1.251s CPU time. May 14 00:52:32.318713 systemd[1]: Starting kubelet.service... May 14 00:52:32.406517 systemd[1]: Started kubelet.service. May 14 00:52:32.445414 kubelet[1913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:52:32.445414 kubelet[1913]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 00:52:32.445414 kubelet[1913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:52:32.445759 kubelet[1913]: I0514 00:52:32.445466 1913 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:52:32.451074 kubelet[1913]: I0514 00:52:32.451035 1913 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 00:52:32.451074 kubelet[1913]: I0514 00:52:32.451063 1913 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:52:32.451321 kubelet[1913]: I0514 00:52:32.451300 1913 server.go:954] "Client rotation is on, will bootstrap in background" May 14 00:52:32.452489 kubelet[1913]: I0514 00:52:32.452467 1913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
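Unlike the first kubelet instance, this one finds a rotated client credential and loads it from /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal sketch for inspecting that certificate's subject and expiry; it assumes the third-party cryptography package is available and that the file holds the certificate and key concatenated, so only the CERTIFICATE block is parsed:

```python
# Minimal sketch (assumes the third-party `cryptography` package): inspect the
# rotated client certificate the kubelet loads above. The PEM file contains the
# certificate and private key back to back, so extract the CERTIFICATE block.
import re
from pathlib import Path
from cryptography import x509

pem = Path("/var/lib/kubelet/pki/kubelet-client-current.pem").read_bytes()
block = re.search(rb"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", pem, re.S)
cert = x509.load_pem_x509_certificate(block.group(0))
print("subject:  ", cert.subject.rfc4514_string())
print("issuer:   ", cert.issuer.rfc4514_string())
print("not after:", cert.not_valid_after)
```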
May 14 00:52:32.454735 kubelet[1913]: I0514 00:52:32.454710 1913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:52:32.459911 kubelet[1913]: E0514 00:52:32.459865 1913 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 00:52:32.459986 kubelet[1913]: I0514 00:52:32.459913 1913 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:52:32.462410 kubelet[1913]: I0514 00:52:32.462389 1913 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:52:32.462607 kubelet[1913]: I0514 00:52:32.462576 1913 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:52:32.462748 kubelet[1913]: I0514 00:52:32.462602 1913 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:52:32.462825 kubelet[1913]: I0514 00:52:32.462753 1913 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:52:32.462825 kubelet[1913]: I0514 00:52:32.462761 1913 container_manager_linux.go:304] "Creating device plugin manager" May 14 00:52:32.462825 kubelet[1913]: I0514 00:52:32.462799 1913 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:32.462922 kubelet[1913]: I0514 00:52:32.462903 1913 kubelet.go:446] "Attempting to sync node with API server" May 14 00:52:32.462951 kubelet[1913]: I0514 00:52:32.462925 1913 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:52:32.462951 kubelet[1913]: I0514 00:52:32.462943 1913 kubelet.go:352] "Adding apiserver pod source" May 14 00:52:32.462992 kubelet[1913]: I0514 00:52:32.462952 1913 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" May 14 00:52:32.463459 kubelet[1913]: I0514 00:52:32.463435 1913 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:52:32.463997 kubelet[1913]: I0514 00:52:32.463981 1913 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:52:32.469069 kubelet[1913]: I0514 00:52:32.469040 1913 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 00:52:32.469148 kubelet[1913]: I0514 00:52:32.469075 1913 server.go:1287] "Started kubelet" May 14 00:52:32.470480 kubelet[1913]: I0514 00:52:32.470427 1913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:52:32.470800 kubelet[1913]: I0514 00:52:32.470778 1913 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:52:32.470942 kubelet[1913]: I0514 00:52:32.470909 1913 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:52:32.471483 kubelet[1913]: I0514 00:52:32.471453 1913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:52:32.471967 kubelet[1913]: I0514 00:52:32.471945 1913 server.go:490] "Adding debug handlers to kubelet server" May 14 00:52:32.476540 kubelet[1913]: I0514 00:52:32.476513 1913 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:52:32.478498 kubelet[1913]: I0514 00:52:32.478473 1913 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 00:52:32.479006 kubelet[1913]: E0514 00:52:32.478978 1913 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:52:32.480923 kubelet[1913]: I0514 00:52:32.480714 1913 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:52:32.481418 kubelet[1913]: I0514 00:52:32.481405 1913 reconciler.go:26] "Reconciler: start to sync state" May 14 00:52:32.489852 kubelet[1913]: I0514 00:52:32.484617 1913 factory.go:221] Registration of the systemd container factory successfully May 14 00:52:32.489852 kubelet[1913]: I0514 00:52:32.484747 1913 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:52:32.491142 kubelet[1913]: E0514 00:52:32.491115 1913 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:52:32.491411 kubelet[1913]: I0514 00:52:32.491347 1913 factory.go:221] Registration of the containerd container factory successfully May 14 00:52:32.506655 kubelet[1913]: I0514 00:52:32.506623 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:52:32.509192 kubelet[1913]: I0514 00:52:32.509169 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:52:32.509311 kubelet[1913]: I0514 00:52:32.509299 1913 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 00:52:32.509381 kubelet[1913]: I0514 00:52:32.509371 1913 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 00:52:32.509433 kubelet[1913]: I0514 00:52:32.509424 1913 kubelet.go:2388] "Starting kubelet main sync loop" May 14 00:52:32.509530 kubelet[1913]: E0514 00:52:32.509513 1913 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:52:32.535707 kubelet[1913]: I0514 00:52:32.535676 1913 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 00:52:32.535835 kubelet[1913]: I0514 00:52:32.535727 1913 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 00:52:32.535835 kubelet[1913]: I0514 00:52:32.535750 1913 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:32.535984 kubelet[1913]: I0514 00:52:32.535964 1913 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:52:32.536028 kubelet[1913]: I0514 00:52:32.535982 1913 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:52:32.536028 kubelet[1913]: I0514 00:52:32.536002 1913 policy_none.go:49] "None policy: Start" May 14 00:52:32.536028 kubelet[1913]: I0514 00:52:32.536011 1913 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 00:52:32.536028 kubelet[1913]: I0514 00:52:32.536026 1913 state_mem.go:35] "Initializing new in-memory state store" May 14 00:52:32.536134 kubelet[1913]: I0514 00:52:32.536121 1913 state_mem.go:75] "Updated machine memory state" May 14 00:52:32.539821 kubelet[1913]: I0514 00:52:32.539632 1913 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:52:32.539989 kubelet[1913]: I0514 00:52:32.539967 1913 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:52:32.540033 kubelet[1913]: I0514 00:52:32.539984 1913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:52:32.540378 kubelet[1913]: I0514 00:52:32.540364 1913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:52:32.540899 kubelet[1913]: E0514 00:52:32.540881 1913 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 00:52:32.610746 kubelet[1913]: I0514 00:52:32.610692 1913 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 00:52:32.612051 kubelet[1913]: I0514 00:52:32.612022 1913 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 00:52:32.613059 kubelet[1913]: I0514 00:52:32.613040 1913 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 00:52:32.643896 kubelet[1913]: I0514 00:52:32.643878 1913 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:52:32.650260 kubelet[1913]: I0514 00:52:32.650220 1913 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 14 00:52:32.650428 kubelet[1913]: I0514 00:52:32.650416 1913 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 00:52:32.682556 kubelet[1913]: I0514 00:52:32.682431 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:32.682556 kubelet[1913]: I0514 00:52:32.682475 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:32.682556 kubelet[1913]: I0514 00:52:32.682499 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 00:52:32.682556 kubelet[1913]: I0514 00:52:32.682535 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:32.682731 kubelet[1913]: I0514 00:52:32.682568 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:32.682731 kubelet[1913]: I0514 00:52:32.682586 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:32.682731 kubelet[1913]: I0514 00:52:32.682601 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:32.682731 kubelet[1913]: I0514 00:52:32.682621 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:32.682731 kubelet[1913]: I0514 00:52:32.682648 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:32.919330 kubelet[1913]: E0514 00:52:32.919291 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:32.919502 kubelet[1913]: E0514 00:52:32.919310 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:32.919578 kubelet[1913]: E0514 00:52:32.919522 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:33.154632 sudo[1949]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 00:52:33.155208 sudo[1949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 14 00:52:33.466583 kubelet[1913]: I0514 00:52:33.466485 1913 apiserver.go:52] "Watching apiserver" May 14 00:52:33.481804 kubelet[1913]: I0514 00:52:33.481773 1913 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:52:33.519458 kubelet[1913]: I0514 00:52:33.519430 1913 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 00:52:33.519653 kubelet[1913]: E0514 00:52:33.519629 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:33.519784 kubelet[1913]: E0514 00:52:33.519495 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:33.526297 kubelet[1913]: E0514 00:52:33.526266 1913 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 00:52:33.526436 kubelet[1913]: E0514 00:52:33.526417 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:33.544984 kubelet[1913]: I0514 00:52:33.544833 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.544816386 
podStartE2EDuration="1.544816386s" podCreationTimestamp="2025-05-14 00:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:33.538655341 +0000 UTC m=+1.128448438" watchObservedRunningTime="2025-05-14 00:52:33.544816386 +0000 UTC m=+1.134609523" May 14 00:52:33.551894 kubelet[1913]: I0514 00:52:33.551845 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.551832325 podStartE2EDuration="1.551832325s" podCreationTimestamp="2025-05-14 00:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:33.545061042 +0000 UTC m=+1.134854179" watchObservedRunningTime="2025-05-14 00:52:33.551832325 +0000 UTC m=+1.141625462" May 14 00:52:33.559041 kubelet[1913]: I0514 00:52:33.559000 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5589889989999999 podStartE2EDuration="1.558988999s" podCreationTimestamp="2025-05-14 00:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:33.55210131 +0000 UTC m=+1.141894447" watchObservedRunningTime="2025-05-14 00:52:33.558988999 +0000 UTC m=+1.148782136" May 14 00:52:33.598265 sudo[1949]: pam_unix(sudo:session): session closed for user root May 14 00:52:34.520814 kubelet[1913]: E0514 00:52:34.520776 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:34.521288 kubelet[1913]: E0514 00:52:34.520896 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:35.522336 kubelet[1913]: E0514 00:52:35.522308 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:35.647986 sudo[1313]: pam_unix(sudo:session): session closed for user root May 14 00:52:35.649382 sshd[1310]: pam_unix(sshd:session): session closed for user core May 14 00:52:35.651710 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:33740.service: Deactivated successfully. May 14 00:52:35.652454 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:52:35.652609 systemd[1]: session-5.scope: Consumed 7.192s CPU time. May 14 00:52:35.653054 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. May 14 00:52:35.653829 systemd-logind[1203]: Removed session 5. May 14 00:52:38.877640 kubelet[1913]: I0514 00:52:38.877603 1913 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:52:38.877963 env[1215]: time="2025-05-14T00:52:38.877888592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:52:38.878304 kubelet[1913]: I0514 00:52:38.878284 1913 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:52:39.673918 systemd[1]: Created slice kubepods-besteffort-pod79eb726c_f91c_43ce_b382_15cd399d7098.slice. 
May 14 00:52:39.684398 systemd[1]: Created slice kubepods-burstable-pod8e075227_ab31_47bf_a1d5_f4ba469d9776.slice. May 14 00:52:39.732576 kubelet[1913]: I0514 00:52:39.732514 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-xtables-lock\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732576 kubelet[1913]: I0514 00:52:39.732579 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-hubble-tls\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732743 kubelet[1913]: I0514 00:52:39.732599 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gh7\" (UniqueName: \"kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-kube-api-access-j2gh7\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732743 kubelet[1913]: I0514 00:52:39.732616 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79eb726c-f91c-43ce-b382-15cd399d7098-xtables-lock\") pod \"kube-proxy-wwxqj\" (UID: \"79eb726c-f91c-43ce-b382-15cd399d7098\") " pod="kube-system/kube-proxy-wwxqj" May 14 00:52:39.732743 kubelet[1913]: I0514 00:52:39.732630 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-run\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732743 kubelet[1913]: I0514 00:52:39.732644 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-config-path\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732743 kubelet[1913]: I0514 00:52:39.732658 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-hostproc\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732864 kubelet[1913]: I0514 00:52:39.732674 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-kernel\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732864 kubelet[1913]: I0514 00:52:39.732691 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79eb726c-f91c-43ce-b382-15cd399d7098-kube-proxy\") pod \"kube-proxy-wwxqj\" (UID: \"79eb726c-f91c-43ce-b382-15cd399d7098\") " pod="kube-system/kube-proxy-wwxqj" May 14 00:52:39.732864 kubelet[1913]: I0514 
00:52:39.732707 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cni-path\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732864 kubelet[1913]: I0514 00:52:39.732720 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e075227-ab31-47bf-a1d5-f4ba469d9776-clustermesh-secrets\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732864 kubelet[1913]: I0514 00:52:39.732734 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-net\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732864 kubelet[1913]: I0514 00:52:39.732753 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79eb726c-f91c-43ce-b382-15cd399d7098-lib-modules\") pod \"kube-proxy-wwxqj\" (UID: \"79eb726c-f91c-43ce-b382-15cd399d7098\") " pod="kube-system/kube-proxy-wwxqj" May 14 00:52:39.732996 kubelet[1913]: I0514 00:52:39.732767 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-lib-modules\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732996 kubelet[1913]: I0514 00:52:39.732781 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jpr\" (UniqueName: \"kubernetes.io/projected/79eb726c-f91c-43ce-b382-15cd399d7098-kube-api-access-62jpr\") pod \"kube-proxy-wwxqj\" (UID: \"79eb726c-f91c-43ce-b382-15cd399d7098\") " pod="kube-system/kube-proxy-wwxqj" May 14 00:52:39.732996 kubelet[1913]: I0514 00:52:39.732797 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-bpf-maps\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732996 kubelet[1913]: I0514 00:52:39.732810 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-cgroup\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.732996 kubelet[1913]: I0514 00:52:39.732828 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-etc-cni-netd\") pod \"cilium-bq4n6\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " pod="kube-system/cilium-bq4n6" May 14 00:52:39.834373 kubelet[1913]: I0514 00:52:39.834337 1913 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 14 00:52:39.922100 systemd[1]: Created slice kubepods-besteffort-pod6f3ac2d5_9c5a_49e1_938e_e58a16060912.slice. May 14 00:52:39.934239 kubelet[1913]: I0514 00:52:39.934120 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f3ac2d5-9c5a-49e1-938e-e58a16060912-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jbqpc\" (UID: \"6f3ac2d5-9c5a-49e1-938e-e58a16060912\") " pod="kube-system/cilium-operator-6c4d7847fc-jbqpc" May 14 00:52:39.934239 kubelet[1913]: I0514 00:52:39.934158 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nv5k\" (UniqueName: \"kubernetes.io/projected/6f3ac2d5-9c5a-49e1-938e-e58a16060912-kube-api-access-8nv5k\") pod \"cilium-operator-6c4d7847fc-jbqpc\" (UID: \"6f3ac2d5-9c5a-49e1-938e-e58a16060912\") " pod="kube-system/cilium-operator-6c4d7847fc-jbqpc" May 14 00:52:39.982791 kubelet[1913]: E0514 00:52:39.982758 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:39.983492 env[1215]: time="2025-05-14T00:52:39.983439919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wwxqj,Uid:79eb726c-f91c-43ce-b382-15cd399d7098,Namespace:kube-system,Attempt:0,}" May 14 00:52:39.987224 kubelet[1913]: E0514 00:52:39.987199 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:39.987869 env[1215]: time="2025-05-14T00:52:39.987664260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq4n6,Uid:8e075227-ab31-47bf-a1d5-f4ba469d9776,Namespace:kube-system,Attempt:0,}" May 14 00:52:40.004680 env[1215]: time="2025-05-14T00:52:40.004609587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:40.004680 env[1215]: time="2025-05-14T00:52:40.004647197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:40.004839 env[1215]: time="2025-05-14T00:52:40.004657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:40.005158 env[1215]: time="2025-05-14T00:52:40.005124519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c476ecc64d8615d5b7abc827b4079b0a508672a405856fd80e8f967b6c05927a pid=2010 runtime=io.containerd.runc.v2 May 14 00:52:40.005413 env[1215]: time="2025-05-14T00:52:40.005347256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:40.005480 env[1215]: time="2025-05-14T00:52:40.005433798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:40.005480 env[1215]: time="2025-05-14T00:52:40.005468367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:40.005769 env[1215]: time="2025-05-14T00:52:40.005707028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9 pid=2017 runtime=io.containerd.runc.v2 May 14 00:52:40.015459 systemd[1]: Started cri-containerd-c476ecc64d8615d5b7abc827b4079b0a508672a405856fd80e8f967b6c05927a.scope. May 14 00:52:40.020139 systemd[1]: Started cri-containerd-45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9.scope. May 14 00:52:40.061263 env[1215]: time="2025-05-14T00:52:40.061214499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wwxqj,Uid:79eb726c-f91c-43ce-b382-15cd399d7098,Namespace:kube-system,Attempt:0,} returns sandbox id \"c476ecc64d8615d5b7abc827b4079b0a508672a405856fd80e8f967b6c05927a\"" May 14 00:52:40.061825 kubelet[1913]: E0514 00:52:40.061803 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:40.072942 env[1215]: time="2025-05-14T00:52:40.072893925Z" level=info msg="CreateContainer within sandbox \"c476ecc64d8615d5b7abc827b4079b0a508672a405856fd80e8f967b6c05927a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:52:40.076282 env[1215]: time="2025-05-14T00:52:40.076136634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq4n6,Uid:8e075227-ab31-47bf-a1d5-f4ba469d9776,Namespace:kube-system,Attempt:0,} returns sandbox id \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\"" May 14 00:52:40.080735 kubelet[1913]: E0514 00:52:40.080511 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:40.088384 env[1215]: time="2025-05-14T00:52:40.088332032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:52:40.099524 env[1215]: time="2025-05-14T00:52:40.099475361Z" level=info msg="CreateContainer within sandbox \"c476ecc64d8615d5b7abc827b4079b0a508672a405856fd80e8f967b6c05927a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c590fbd3efffbd89a4e9fbee0ff3b7af30b58fc916da4ad19113dbc038346108\"" May 14 00:52:40.100348 env[1215]: time="2025-05-14T00:52:40.100320817Z" level=info msg="StartContainer for \"c590fbd3efffbd89a4e9fbee0ff3b7af30b58fc916da4ad19113dbc038346108\"" May 14 00:52:40.114562 systemd[1]: Started cri-containerd-c590fbd3efffbd89a4e9fbee0ff3b7af30b58fc916da4ad19113dbc038346108.scope. 
May 14 00:52:40.156483 env[1215]: time="2025-05-14T00:52:40.156428362Z" level=info msg="StartContainer for \"c590fbd3efffbd89a4e9fbee0ff3b7af30b58fc916da4ad19113dbc038346108\" returns successfully" May 14 00:52:40.225886 kubelet[1913]: E0514 00:52:40.225702 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:40.227540 env[1215]: time="2025-05-14T00:52:40.226848326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jbqpc,Uid:6f3ac2d5-9c5a-49e1-938e-e58a16060912,Namespace:kube-system,Attempt:0,}" May 14 00:52:40.239721 env[1215]: time="2025-05-14T00:52:40.239634074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:40.239721 env[1215]: time="2025-05-14T00:52:40.239681887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:40.239721 env[1215]: time="2025-05-14T00:52:40.239692529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:40.239881 env[1215]: time="2025-05-14T00:52:40.239846409Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e pid=2126 runtime=io.containerd.runc.v2 May 14 00:52:40.256275 systemd[1]: Started cri-containerd-988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e.scope. May 14 00:52:40.297710 env[1215]: time="2025-05-14T00:52:40.297656749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jbqpc,Uid:6f3ac2d5-9c5a-49e1-938e-e58a16060912,Namespace:kube-system,Attempt:0,} returns sandbox id \"988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e\"" May 14 00:52:40.298278 kubelet[1913]: E0514 00:52:40.298257 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:40.531302 kubelet[1913]: E0514 00:52:40.531210 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:40.792093 kubelet[1913]: E0514 00:52:40.791998 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:40.806430 kubelet[1913]: I0514 00:52:40.806370 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wwxqj" podStartSLOduration=1.806351443 podStartE2EDuration="1.806351443s" podCreationTimestamp="2025-05-14 00:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:40.541881268 +0000 UTC m=+8.131674405" watchObservedRunningTime="2025-05-14 00:52:40.806351443 +0000 UTC m=+8.396144580" May 14 00:52:41.532517 kubelet[1913]: E0514 00:52:41.532484 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:41.739598 
kubelet[1913]: E0514 00:52:41.739330 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:42.533808 kubelet[1913]: E0514 00:52:42.533771 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:44.516575 kubelet[1913]: E0514 00:52:44.516518 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:44.536683 kubelet[1913]: E0514 00:52:44.536658 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:44.656775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277006028.mount: Deactivated successfully. May 14 00:52:46.878347 env[1215]: time="2025-05-14T00:52:46.878295829Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:46.879933 env[1215]: time="2025-05-14T00:52:46.879902328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:46.884089 env[1215]: time="2025-05-14T00:52:46.884053062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:46.884713 env[1215]: time="2025-05-14T00:52:46.884685939Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 00:52:46.889588 env[1215]: time="2025-05-14T00:52:46.889559008Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:52:46.890609 env[1215]: time="2025-05-14T00:52:46.890577957Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:52:46.901907 env[1215]: time="2025-05-14T00:52:46.901858700Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\"" May 14 00:52:46.902480 env[1215]: time="2025-05-14T00:52:46.902429926Z" level=info msg="StartContainer for \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\"" May 14 00:52:46.926906 systemd[1]: run-containerd-runc-k8s.io-e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1-runc.w1N4zx.mount: Deactivated successfully. May 14 00:52:46.928664 systemd[1]: Started cri-containerd-e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1.scope. 
May 14 00:52:46.991350 update_engine[1210]: I0514 00:52:46.991299 1210 update_attempter.cc:509] Updating boot flags... May 14 00:52:47.017781 systemd[1]: cri-containerd-e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1.scope: Deactivated successfully. May 14 00:52:47.023827 env[1215]: time="2025-05-14T00:52:47.023790615Z" level=info msg="StartContainer for \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\" returns successfully" May 14 00:52:47.109027 env[1215]: time="2025-05-14T00:52:47.108622326Z" level=info msg="shim disconnected" id=e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1 May 14 00:52:47.109027 env[1215]: time="2025-05-14T00:52:47.108672495Z" level=warning msg="cleaning up after shim disconnected" id=e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1 namespace=k8s.io May 14 00:52:47.109027 env[1215]: time="2025-05-14T00:52:47.108681777Z" level=info msg="cleaning up dead shim" May 14 00:52:47.126460 env[1215]: time="2025-05-14T00:52:47.125967560Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2349 runtime=io.containerd.runc.v2\n" May 14 00:52:47.552903 kubelet[1913]: E0514 00:52:47.552855 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:47.556034 env[1215]: time="2025-05-14T00:52:47.555302035Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:52:47.566348 env[1215]: time="2025-05-14T00:52:47.566302984Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\"" May 14 00:52:47.566769 env[1215]: time="2025-05-14T00:52:47.566701135Z" level=info msg="StartContainer for \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\"" May 14 00:52:47.585395 systemd[1]: Started cri-containerd-bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8.scope. May 14 00:52:47.622320 env[1215]: time="2025-05-14T00:52:47.622264500Z" level=info msg="StartContainer for \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\" returns successfully" May 14 00:52:47.633785 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:52:47.634017 systemd[1]: Stopped systemd-sysctl.service. May 14 00:52:47.634210 systemd[1]: Stopping systemd-sysctl.service... May 14 00:52:47.635686 systemd[1]: Starting systemd-sysctl.service... May 14 00:52:47.636678 systemd[1]: cri-containerd-bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8.scope: Deactivated successfully. May 14 00:52:47.649698 systemd[1]: Finished systemd-sysctl.service. 
May 14 00:52:47.673735 env[1215]: time="2025-05-14T00:52:47.673687532Z" level=info msg="shim disconnected" id=bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8 May 14 00:52:47.673735 env[1215]: time="2025-05-14T00:52:47.673732820Z" level=warning msg="cleaning up after shim disconnected" id=bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8 namespace=k8s.io May 14 00:52:47.673735 env[1215]: time="2025-05-14T00:52:47.673742341Z" level=info msg="cleaning up dead shim" May 14 00:52:47.680051 env[1215]: time="2025-05-14T00:52:47.680010652Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2413 runtime=io.containerd.runc.v2\n" May 14 00:52:47.899608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1-rootfs.mount: Deactivated successfully. May 14 00:52:48.547838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938052188.mount: Deactivated successfully. May 14 00:52:48.554312 kubelet[1913]: E0514 00:52:48.554273 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:48.559751 env[1215]: time="2025-05-14T00:52:48.558887073Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:52:48.573651 env[1215]: time="2025-05-14T00:52:48.573596874Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\"" May 14 00:52:48.574287 env[1215]: time="2025-05-14T00:52:48.574260705Z" level=info msg="StartContainer for \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\"" May 14 00:52:48.592019 systemd[1]: Started cri-containerd-523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068.scope. May 14 00:52:48.648123 env[1215]: time="2025-05-14T00:52:48.647967693Z" level=info msg="StartContainer for \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\" returns successfully" May 14 00:52:48.648277 systemd[1]: cri-containerd-523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068.scope: Deactivated successfully. May 14 00:52:48.674674 env[1215]: time="2025-05-14T00:52:48.674628948Z" level=info msg="shim disconnected" id=523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068 May 14 00:52:48.674887 env[1215]: time="2025-05-14T00:52:48.674867868Z" level=warning msg="cleaning up after shim disconnected" id=523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068 namespace=k8s.io May 14 00:52:48.674944 env[1215]: time="2025-05-14T00:52:48.674931759Z" level=info msg="cleaning up dead shim" May 14 00:52:48.681014 env[1215]: time="2025-05-14T00:52:48.680975258Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:52:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2469 runtime=io.containerd.runc.v2\n" May 14 00:52:48.899300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068-rootfs.mount: Deactivated successfully. 
May 14 00:52:49.043737 env[1215]: time="2025-05-14T00:52:49.043687471Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:49.045085 env[1215]: time="2025-05-14T00:52:49.045056531Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:49.049654 env[1215]: time="2025-05-14T00:52:49.049622224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:49.050386 env[1215]: time="2025-05-14T00:52:49.050347740Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 00:52:49.054887 env[1215]: time="2025-05-14T00:52:49.054850383Z" level=info msg="CreateContainer within sandbox \"988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:52:49.064023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303257973.mount: Deactivated successfully. May 14 00:52:49.064817 env[1215]: time="2025-05-14T00:52:49.064776457Z" level=info msg="CreateContainer within sandbox \"988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\"" May 14 00:52:49.065269 env[1215]: time="2025-05-14T00:52:49.065194004Z" level=info msg="StartContainer for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\"" May 14 00:52:49.080251 systemd[1]: Started cri-containerd-e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897.scope. 
May 14 00:52:49.155303 env[1215]: time="2025-05-14T00:52:49.155152247Z" level=info msg="StartContainer for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" returns successfully" May 14 00:52:49.567808 kubelet[1913]: E0514 00:52:49.567709 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:49.568121 kubelet[1913]: E0514 00:52:49.567904 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:49.573169 env[1215]: time="2025-05-14T00:52:49.573129034Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:52:49.593807 env[1215]: time="2025-05-14T00:52:49.593667132Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\"" May 14 00:52:49.594204 env[1215]: time="2025-05-14T00:52:49.594146129Z" level=info msg="StartContainer for \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\"" May 14 00:52:49.610901 kubelet[1913]: I0514 00:52:49.610838 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jbqpc" podStartSLOduration=1.8561213030000001 podStartE2EDuration="10.610814365s" podCreationTimestamp="2025-05-14 00:52:39 +0000 UTC" firstStartedPulling="2025-05-14 00:52:40.299035301 +0000 UTC m=+7.888828438" lastFinishedPulling="2025-05-14 00:52:49.053728403 +0000 UTC m=+16.643521500" observedRunningTime="2025-05-14 00:52:49.610034319 +0000 UTC m=+17.199827456" watchObservedRunningTime="2025-05-14 00:52:49.610814365 +0000 UTC m=+17.200607462" May 14 00:52:49.632878 systemd[1]: Started cri-containerd-70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516.scope. May 14 00:52:49.679142 systemd[1]: cri-containerd-70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516.scope: Deactivated successfully. 
May 14 00:52:49.683823 env[1215]: time="2025-05-14T00:52:49.683778159Z" level=info msg="StartContainer for \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\" returns successfully" May 14 00:52:49.717958 env[1215]: time="2025-05-14T00:52:49.717902798Z" level=info msg="shim disconnected" id=70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516 May 14 00:52:49.717958 env[1215]: time="2025-05-14T00:52:49.717942484Z" level=warning msg="cleaning up after shim disconnected" id=70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516 namespace=k8s.io May 14 00:52:49.717958 env[1215]: time="2025-05-14T00:52:49.717951806Z" level=info msg="cleaning up dead shim" May 14 00:52:49.739674 env[1215]: time="2025-05-14T00:52:49.739621725Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:52:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2562 runtime=io.containerd.runc.v2\n" May 14 00:52:50.571301 kubelet[1913]: E0514 00:52:50.569307 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:50.571301 kubelet[1913]: E0514 00:52:50.569834 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:50.573475 env[1215]: time="2025-05-14T00:52:50.571507262Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:52:50.583517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194720508.mount: Deactivated successfully. May 14 00:52:50.588062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539765559.mount: Deactivated successfully. May 14 00:52:50.592161 env[1215]: time="2025-05-14T00:52:50.592114935Z" level=info msg="CreateContainer within sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\"" May 14 00:52:50.592784 env[1215]: time="2025-05-14T00:52:50.592754432Z" level=info msg="StartContainer for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\"" May 14 00:52:50.607807 systemd[1]: Started cri-containerd-c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746.scope. May 14 00:52:50.666267 env[1215]: time="2025-05-14T00:52:50.663603273Z" level=info msg="StartContainer for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" returns successfully" May 14 00:52:50.742915 kubelet[1913]: I0514 00:52:50.742103 1913 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 00:52:50.782322 systemd[1]: Created slice kubepods-burstable-pod145a0b0d_3045_4a88_a337_a8d5ca4b5e3f.slice. May 14 00:52:50.786820 systemd[1]: Created slice kubepods-burstable-pod0ba1cf6c_ac1c_4a4d_a0ed_40981fe9f628.slice. 
May 14 00:52:50.813634 kubelet[1913]: I0514 00:52:50.811315 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvrq\" (UniqueName: \"kubernetes.io/projected/145a0b0d-3045-4a88-a337-a8d5ca4b5e3f-kube-api-access-7pvrq\") pod \"coredns-668d6bf9bc-v9pmt\" (UID: \"145a0b0d-3045-4a88-a337-a8d5ca4b5e3f\") " pod="kube-system/coredns-668d6bf9bc-v9pmt" May 14 00:52:50.813634 kubelet[1913]: I0514 00:52:50.811369 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145a0b0d-3045-4a88-a337-a8d5ca4b5e3f-config-volume\") pod \"coredns-668d6bf9bc-v9pmt\" (UID: \"145a0b0d-3045-4a88-a337-a8d5ca4b5e3f\") " pod="kube-system/coredns-668d6bf9bc-v9pmt" May 14 00:52:50.813634 kubelet[1913]: I0514 00:52:50.811397 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ba1cf6c-ac1c-4a4d-a0ed-40981fe9f628-config-volume\") pod \"coredns-668d6bf9bc-848l5\" (UID: \"0ba1cf6c-ac1c-4a4d-a0ed-40981fe9f628\") " pod="kube-system/coredns-668d6bf9bc-848l5" May 14 00:52:50.813634 kubelet[1913]: I0514 00:52:50.811424 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkj4f\" (UniqueName: \"kubernetes.io/projected/0ba1cf6c-ac1c-4a4d-a0ed-40981fe9f628-kube-api-access-qkj4f\") pod \"coredns-668d6bf9bc-848l5\" (UID: \"0ba1cf6c-ac1c-4a4d-a0ed-40981fe9f628\") " pod="kube-system/coredns-668d6bf9bc-848l5" May 14 00:52:50.925261 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 14 00:52:51.085840 kubelet[1913]: E0514 00:52:51.085802 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:51.086464 env[1215]: time="2025-05-14T00:52:51.086414634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9pmt,Uid:145a0b0d-3045-4a88-a337-a8d5ca4b5e3f,Namespace:kube-system,Attempt:0,}" May 14 00:52:51.089928 kubelet[1913]: E0514 00:52:51.089902 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:51.092591 env[1215]: time="2025-05-14T00:52:51.092547209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-848l5,Uid:0ba1cf6c-ac1c-4a4d-a0ed-40981fe9f628,Namespace:kube-system,Attempt:0,}" May 14 00:52:51.170315 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 14 00:52:51.574624 kubelet[1913]: E0514 00:52:51.574534 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:51.605686 kubelet[1913]: I0514 00:52:51.605623 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bq4n6" podStartSLOduration=5.797610002 podStartE2EDuration="12.605608319s" podCreationTimestamp="2025-05-14 00:52:39 +0000 UTC" firstStartedPulling="2025-05-14 00:52:40.081412023 +0000 UTC m=+7.671205160" lastFinishedPulling="2025-05-14 00:52:46.8894103 +0000 UTC m=+14.479203477" observedRunningTime="2025-05-14 00:52:51.605350921 +0000 UTC m=+19.195144058" watchObservedRunningTime="2025-05-14 00:52:51.605608319 +0000 UTC m=+19.195401496" May 14 00:52:52.576472 kubelet[1913]: E0514 00:52:52.576441 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:52.785584 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 14 00:52:52.785677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 14 00:52:52.783818 systemd-networkd[1042]: cilium_host: Link UP May 14 00:52:52.783914 systemd-networkd[1042]: cilium_net: Link UP May 14 00:52:52.784039 systemd-networkd[1042]: cilium_net: Gained carrier May 14 00:52:52.784157 systemd-networkd[1042]: cilium_host: Gained carrier May 14 00:52:52.860762 systemd-networkd[1042]: cilium_vxlan: Link UP May 14 00:52:52.860768 systemd-networkd[1042]: cilium_vxlan: Gained carrier May 14 00:52:53.180271 kernel: NET: Registered PF_ALG protocol family May 14 00:52:53.267382 systemd-networkd[1042]: cilium_host: Gained IPv6LL May 14 00:52:53.578717 kubelet[1913]: E0514 00:52:53.578626 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:53.628405 systemd-networkd[1042]: cilium_net: Gained IPv6LL May 14 00:52:53.752384 systemd-networkd[1042]: lxc_health: Link UP May 14 00:52:53.758792 systemd-networkd[1042]: lxc_health: Gained carrier May 14 00:52:53.759257 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:52:54.012375 systemd-networkd[1042]: cilium_vxlan: Gained IPv6LL May 14 00:52:54.149441 systemd-networkd[1042]: lxc8315ea235f19: Link UP May 14 00:52:54.156049 kernel: eth0: renamed from tmp4f1e9 May 14 00:52:54.163719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:52:54.163796 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8315ea235f19: link becomes ready May 14 00:52:54.163836 systemd-networkd[1042]: lxc8315ea235f19: Gained carrier May 14 00:52:54.163994 systemd-networkd[1042]: lxc4cf87243ce38: Link UP May 14 00:52:54.172259 kernel: eth0: renamed from tmp847ff May 14 00:52:54.180283 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:52:54.180496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4cf87243ce38: link becomes ready May 14 00:52:54.180385 systemd-networkd[1042]: lxc4cf87243ce38: Gained carrier May 14 00:52:54.579687 kubelet[1913]: E0514 00:52:54.579660 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:55.291362 systemd-networkd[1042]: lxc8315ea235f19: Gained 
IPv6LL May 14 00:52:55.611362 systemd-networkd[1042]: lxc_health: Gained IPv6LL May 14 00:52:55.675376 systemd-networkd[1042]: lxc4cf87243ce38: Gained IPv6LL May 14 00:52:56.918481 kubelet[1913]: I0514 00:52:56.918441 1913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:52:56.918872 kubelet[1913]: E0514 00:52:56.918847 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:57.476496 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:35906.service. May 14 00:52:57.520880 sshd[3117]: Accepted publickey for core from 10.0.0.1 port 35906 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:57.522338 sshd[3117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:57.526748 systemd[1]: Started session-6.scope. May 14 00:52:57.527058 systemd-logind[1203]: New session 6 of user core. May 14 00:52:57.588994 kubelet[1913]: E0514 00:52:57.588962 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:57.670830 sshd[3117]: pam_unix(sshd:session): session closed for user core May 14 00:52:57.673514 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:35906.service: Deactivated successfully. May 14 00:52:57.674211 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:52:57.674615 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit. May 14 00:52:57.675291 systemd-logind[1203]: Removed session 6. May 14 00:52:57.745803 env[1215]: time="2025-05-14T00:52:57.745133228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:57.745803 env[1215]: time="2025-05-14T00:52:57.745169593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:57.745803 env[1215]: time="2025-05-14T00:52:57.745179474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:57.745803 env[1215]: time="2025-05-14T00:52:57.745412620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f1e9d9d230bc13984c504e7dbb3b6a398c613f6b9be786c146d2f738a29ab1c pid=3145 runtime=io.containerd.runc.v2 May 14 00:52:57.751579 env[1215]: time="2025-05-14T00:52:57.746851101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:57.751579 env[1215]: time="2025-05-14T00:52:57.746889665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:57.751579 env[1215]: time="2025-05-14T00:52:57.746904307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:57.751579 env[1215]: time="2025-05-14T00:52:57.747412923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/847ff02bf50406adfb4b0e04c8bbeacf1126edb5c239de795599b7ead24b77b5 pid=3158 runtime=io.containerd.runc.v2 May 14 00:52:57.766856 systemd[1]: Started cri-containerd-4f1e9d9d230bc13984c504e7dbb3b6a398c613f6b9be786c146d2f738a29ab1c.scope. May 14 00:52:57.768114 systemd[1]: Started cri-containerd-847ff02bf50406adfb4b0e04c8bbeacf1126edb5c239de795599b7ead24b77b5.scope. May 14 00:52:57.799360 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:52:57.804197 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:52:57.815838 env[1215]: time="2025-05-14T00:52:57.815798572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9pmt,Uid:145a0b0d-3045-4a88-a337-a8d5ca4b5e3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f1e9d9d230bc13984c504e7dbb3b6a398c613f6b9be786c146d2f738a29ab1c\"" May 14 00:52:57.816334 kubelet[1913]: E0514 00:52:57.816306 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:57.818632 env[1215]: time="2025-05-14T00:52:57.818590124Z" level=info msg="CreateContainer within sandbox \"4f1e9d9d230bc13984c504e7dbb3b6a398c613f6b9be786c146d2f738a29ab1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:52:57.826141 env[1215]: time="2025-05-14T00:52:57.826084802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-848l5,Uid:0ba1cf6c-ac1c-4a4d-a0ed-40981fe9f628,Namespace:kube-system,Attempt:0,} returns sandbox id \"847ff02bf50406adfb4b0e04c8bbeacf1126edb5c239de795599b7ead24b77b5\"" May 14 00:52:57.827091 kubelet[1913]: E0514 00:52:57.826927 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:57.828931 env[1215]: time="2025-05-14T00:52:57.828900317Z" level=info msg="CreateContainer within sandbox \"847ff02bf50406adfb4b0e04c8bbeacf1126edb5c239de795599b7ead24b77b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:52:57.836743 env[1215]: time="2025-05-14T00:52:57.836706270Z" level=info msg="CreateContainer within sandbox \"4f1e9d9d230bc13984c504e7dbb3b6a398c613f6b9be786c146d2f738a29ab1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0809f4faf7680f18028040a1d89a01318da789b3622de11cd32972069b7140ac\"" May 14 00:52:57.838188 env[1215]: time="2025-05-14T00:52:57.838159713Z" level=info msg="StartContainer for \"0809f4faf7680f18028040a1d89a01318da789b3622de11cd32972069b7140ac\"" May 14 00:52:57.850492 env[1215]: time="2025-05-14T00:52:57.850448807Z" level=info msg="CreateContainer within sandbox \"847ff02bf50406adfb4b0e04c8bbeacf1126edb5c239de795599b7ead24b77b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6d128f492bd0daa56c3f0593da3f80a60ec3370429025713eefa9199c47983d\"" May 14 00:52:57.851314 env[1215]: time="2025-05-14T00:52:57.851284981Z" level=info msg="StartContainer for \"c6d128f492bd0daa56c3f0593da3f80a60ec3370429025713eefa9199c47983d\"" May 14 00:52:57.855592 systemd[1]: Started 
cri-containerd-0809f4faf7680f18028040a1d89a01318da789b3622de11cd32972069b7140ac.scope. May 14 00:52:57.866925 systemd[1]: Started cri-containerd-c6d128f492bd0daa56c3f0593da3f80a60ec3370429025713eefa9199c47983d.scope. May 14 00:52:57.916200 env[1215]: time="2025-05-14T00:52:57.916158116Z" level=info msg="StartContainer for \"c6d128f492bd0daa56c3f0593da3f80a60ec3370429025713eefa9199c47983d\" returns successfully" May 14 00:52:57.920821 env[1215]: time="2025-05-14T00:52:57.920770872Z" level=info msg="StartContainer for \"0809f4faf7680f18028040a1d89a01318da789b3622de11cd32972069b7140ac\" returns successfully" May 14 00:52:58.591923 kubelet[1913]: E0514 00:52:58.591792 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:58.593913 kubelet[1913]: E0514 00:52:58.593820 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:58.601258 kubelet[1913]: I0514 00:52:58.601201 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-848l5" podStartSLOduration=19.601186989 podStartE2EDuration="19.601186989s" podCreationTimestamp="2025-05-14 00:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:58.600247248 +0000 UTC m=+26.190040385" watchObservedRunningTime="2025-05-14 00:52:58.601186989 +0000 UTC m=+26.190980126" May 14 00:52:58.621836 kubelet[1913]: I0514 00:52:58.621778 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v9pmt" podStartSLOduration=19.621762638 podStartE2EDuration="19.621762638s" podCreationTimestamp="2025-05-14 00:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:58.611131257 +0000 UTC m=+26.200924394" watchObservedRunningTime="2025-05-14 00:52:58.621762638 +0000 UTC m=+26.211555735" May 14 00:52:58.757095 systemd[1]: run-containerd-runc-k8s.io-4f1e9d9d230bc13984c504e7dbb3b6a398c613f6b9be786c146d2f738a29ab1c-runc.ZK9Oiy.mount: Deactivated successfully. May 14 00:52:59.595771 kubelet[1913]: E0514 00:52:59.595731 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:59.596107 kubelet[1913]: E0514 00:52:59.595736 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:00.597714 kubelet[1913]: E0514 00:53:00.597689 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:00.598089 kubelet[1913]: E0514 00:53:00.597770 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:02.676143 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:59728.service. 
May 14 00:53:02.720579 sshd[3305]: Accepted publickey for core from 10.0.0.1 port 59728 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:02.722317 sshd[3305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:02.726047 systemd-logind[1203]: New session 7 of user core. May 14 00:53:02.726469 systemd[1]: Started session-7.scope. May 14 00:53:02.834446 sshd[3305]: pam_unix(sshd:session): session closed for user core May 14 00:53:02.836904 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:59728.service: Deactivated successfully. May 14 00:53:02.837604 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:53:02.838103 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit. May 14 00:53:02.838791 systemd-logind[1203]: Removed session 7. May 14 00:53:07.839636 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:59744.service. May 14 00:53:07.879304 sshd[3319]: Accepted publickey for core from 10.0.0.1 port 59744 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:07.880692 sshd[3319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:07.883811 systemd-logind[1203]: New session 8 of user core. May 14 00:53:07.884631 systemd[1]: Started session-8.scope. May 14 00:53:08.001589 sshd[3319]: pam_unix(sshd:session): session closed for user core May 14 00:53:08.003993 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:59744.service: Deactivated successfully. May 14 00:53:08.004715 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:53:08.005256 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. May 14 00:53:08.006005 systemd-logind[1203]: Removed session 8. May 14 00:53:13.009381 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:43964.service. May 14 00:53:13.061505 sshd[3335]: Accepted publickey for core from 10.0.0.1 port 43964 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:13.064197 sshd[3335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:13.070159 systemd[1]: Started session-9.scope. May 14 00:53:13.070651 systemd-logind[1203]: New session 9 of user core. May 14 00:53:13.200944 sshd[3335]: pam_unix(sshd:session): session closed for user core May 14 00:53:13.205416 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:43974.service. May 14 00:53:13.206010 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:43964.service: Deactivated successfully. May 14 00:53:13.206660 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:53:13.207376 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit. May 14 00:53:13.208321 systemd-logind[1203]: Removed session 9. May 14 00:53:13.251096 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 43974 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:13.252390 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:13.256502 systemd-logind[1203]: New session 10 of user core. May 14 00:53:13.257414 systemd[1]: Started session-10.scope. May 14 00:53:13.405794 sshd[3348]: pam_unix(sshd:session): session closed for user core May 14 00:53:13.409939 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:43984.service. May 14 00:53:13.417077 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:53:13.417833 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:43974.service: Deactivated successfully. 
May 14 00:53:13.418549 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit. May 14 00:53:13.419287 systemd-logind[1203]: Removed session 10. May 14 00:53:13.455327 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 43984 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:13.457079 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:13.460708 systemd-logind[1203]: New session 11 of user core. May 14 00:53:13.461479 systemd[1]: Started session-11.scope. May 14 00:53:13.573148 sshd[3359]: pam_unix(sshd:session): session closed for user core May 14 00:53:13.575587 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:43984.service: Deactivated successfully. May 14 00:53:13.576297 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:53:13.576763 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit. May 14 00:53:13.577401 systemd-logind[1203]: Removed session 11. May 14 00:53:18.578058 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:43996.service. May 14 00:53:18.618194 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 43996 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:18.619671 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:18.622858 systemd-logind[1203]: New session 12 of user core. May 14 00:53:18.623669 systemd[1]: Started session-12.scope. May 14 00:53:18.729104 sshd[3374]: pam_unix(sshd:session): session closed for user core May 14 00:53:18.731506 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:43996.service: Deactivated successfully. May 14 00:53:18.732193 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:53:18.732784 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit. May 14 00:53:18.733543 systemd-logind[1203]: Removed session 12. May 14 00:53:23.733940 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:55256.service. May 14 00:53:23.774041 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:23.775953 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:23.779775 systemd-logind[1203]: New session 13 of user core. May 14 00:53:23.780355 systemd[1]: Started session-13.scope. May 14 00:53:23.887989 sshd[3387]: pam_unix(sshd:session): session closed for user core May 14 00:53:23.891976 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:55264.service. May 14 00:53:23.892527 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:55256.service: Deactivated successfully. May 14 00:53:23.893213 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:53:23.893756 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit. May 14 00:53:23.894563 systemd-logind[1203]: Removed session 13. May 14 00:53:23.933728 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:23.935007 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:23.938958 systemd-logind[1203]: New session 14 of user core. May 14 00:53:23.939085 systemd[1]: Started session-14.scope. May 14 00:53:24.131460 sshd[3400]: pam_unix(sshd:session): session closed for user core May 14 00:53:24.134487 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:55278.service. 
May 14 00:53:24.135008 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:55264.service: Deactivated successfully. May 14 00:53:24.135922 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:53:24.136583 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit. May 14 00:53:24.137604 systemd-logind[1203]: Removed session 14. May 14 00:53:24.178801 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 55278 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:24.180353 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:24.183836 systemd-logind[1203]: New session 15 of user core. May 14 00:53:24.184667 systemd[1]: Started session-15.scope. May 14 00:53:24.927957 sshd[3411]: pam_unix(sshd:session): session closed for user core May 14 00:53:24.931712 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:55294.service. May 14 00:53:24.932146 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:55278.service: Deactivated successfully. May 14 00:53:24.932968 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:53:24.933477 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. May 14 00:53:24.934471 systemd-logind[1203]: Removed session 15. May 14 00:53:24.974087 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 55294 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:24.975900 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:24.979495 systemd-logind[1203]: New session 16 of user core. May 14 00:53:24.980319 systemd[1]: Started session-16.scope. May 14 00:53:25.195190 sshd[3430]: pam_unix(sshd:session): session closed for user core May 14 00:53:25.198915 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:55304.service. May 14 00:53:25.199424 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:55294.service: Deactivated successfully. May 14 00:53:25.200350 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:53:25.201411 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. May 14 00:53:25.204202 systemd-logind[1203]: Removed session 16. May 14 00:53:25.242638 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 55304 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:25.243880 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:25.247167 systemd-logind[1203]: New session 17 of user core. May 14 00:53:25.247963 systemd[1]: Started session-17.scope. May 14 00:53:25.357863 sshd[3444]: pam_unix(sshd:session): session closed for user core May 14 00:53:25.361325 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:55304.service: Deactivated successfully. May 14 00:53:25.362024 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:53:25.362778 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. May 14 00:53:25.363769 systemd-logind[1203]: Removed session 17. May 14 00:53:30.362547 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:55310.service. May 14 00:53:30.403442 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 55310 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:30.405294 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:30.409160 systemd-logind[1203]: New session 18 of user core. May 14 00:53:30.409643 systemd[1]: Started session-18.scope. 
May 14 00:53:30.526320 sshd[3463]: pam_unix(sshd:session): session closed for user core May 14 00:53:30.529469 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:53:30.530111 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit. May 14 00:53:30.530204 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:55310.service: Deactivated successfully. May 14 00:53:30.531812 systemd-logind[1203]: Removed session 18. May 14 00:53:35.531690 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:60818.service. May 14 00:53:35.572918 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 60818 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:35.574025 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:35.580123 systemd-logind[1203]: New session 19 of user core. May 14 00:53:35.580898 systemd[1]: Started session-19.scope. May 14 00:53:35.708187 sshd[3479]: pam_unix(sshd:session): session closed for user core May 14 00:53:35.712218 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:60818.service: Deactivated successfully. May 14 00:53:35.712948 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:53:35.713838 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. May 14 00:53:35.716632 systemd-logind[1203]: Removed session 19. May 14 00:53:40.712331 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:60822.service. May 14 00:53:40.752436 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 60822 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:40.753667 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:40.757292 systemd-logind[1203]: New session 20 of user core. May 14 00:53:40.758090 systemd[1]: Started session-20.scope. May 14 00:53:40.862048 sshd[3495]: pam_unix(sshd:session): session closed for user core May 14 00:53:40.864836 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:60822.service: Deactivated successfully. May 14 00:53:40.865548 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:53:40.866021 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. May 14 00:53:40.866644 systemd-logind[1203]: Removed session 20. May 14 00:53:44.510709 kubelet[1913]: E0514 00:53:44.510640 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:45.866414 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:50680.service. May 14 00:53:45.906798 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 50680 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:45.908364 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:45.911695 systemd-logind[1203]: New session 21 of user core. May 14 00:53:45.912538 systemd[1]: Started session-21.scope. May 14 00:53:46.021203 sshd[3508]: pam_unix(sshd:session): session closed for user core May 14 00:53:46.025591 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:50682.service. May 14 00:53:46.026257 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:50680.service: Deactivated successfully. May 14 00:53:46.027044 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:53:46.027736 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit. May 14 00:53:46.028651 systemd-logind[1203]: Removed session 21. 
May 14 00:53:46.066530 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 50682 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:46.067746 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:46.071116 systemd-logind[1203]: New session 22 of user core. May 14 00:53:46.071979 systemd[1]: Started session-22.scope. May 14 00:53:47.806875 env[1215]: time="2025-05-14T00:53:47.806809577Z" level=info msg="StopContainer for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" with timeout 30 (s)" May 14 00:53:47.807346 env[1215]: time="2025-05-14T00:53:47.807166163Z" level=info msg="Stop container \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" with signal terminated" May 14 00:53:47.818398 systemd[1]: cri-containerd-e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897.scope: Deactivated successfully. May 14 00:53:47.844479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897-rootfs.mount: Deactivated successfully. May 14 00:53:47.852634 env[1215]: time="2025-05-14T00:53:47.852589983Z" level=info msg="shim disconnected" id=e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897 May 14 00:53:47.852866 env[1215]: time="2025-05-14T00:53:47.852637502Z" level=warning msg="cleaning up after shim disconnected" id=e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897 namespace=k8s.io May 14 00:53:47.852866 env[1215]: time="2025-05-14T00:53:47.852647461Z" level=info msg="cleaning up dead shim" May 14 00:53:47.863103 env[1215]: time="2025-05-14T00:53:47.863060462Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3566 runtime=io.containerd.runc.v2\n" May 14 00:53:47.865243 env[1215]: time="2025-05-14T00:53:47.865200540Z" level=info msg="StopContainer for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" returns successfully" May 14 00:53:47.865950 env[1215]: time="2025-05-14T00:53:47.865919713Z" level=info msg="StopPodSandbox for \"988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e\"" May 14 00:53:47.866107 env[1215]: time="2025-05-14T00:53:47.866082987Z" level=info msg="Container to stop \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:53:47.869336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e-shm.mount: Deactivated successfully. May 14 00:53:47.872394 systemd[1]: cri-containerd-988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e.scope: Deactivated successfully. 
May 14 00:53:47.873722 env[1215]: time="2025-05-14T00:53:47.873657537Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:53:47.880176 env[1215]: time="2025-05-14T00:53:47.880142968Z" level=info msg="StopContainer for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" with timeout 2 (s)" May 14 00:53:47.880555 env[1215]: time="2025-05-14T00:53:47.880529473Z" level=info msg="Stop container \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" with signal terminated" May 14 00:53:47.886308 systemd-networkd[1042]: lxc_health: Link DOWN May 14 00:53:47.886591 systemd-networkd[1042]: lxc_health: Lost carrier May 14 00:53:47.898703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e-rootfs.mount: Deactivated successfully. May 14 00:53:47.901563 env[1215]: time="2025-05-14T00:53:47.901287999Z" level=info msg="shim disconnected" id=988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e May 14 00:53:47.901563 env[1215]: time="2025-05-14T00:53:47.901339917Z" level=warning msg="cleaning up after shim disconnected" id=988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e namespace=k8s.io May 14 00:53:47.901563 env[1215]: time="2025-05-14T00:53:47.901350716Z" level=info msg="cleaning up dead shim" May 14 00:53:47.910298 env[1215]: time="2025-05-14T00:53:47.909746995Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3606 runtime=io.containerd.runc.v2\n" May 14 00:53:47.910298 env[1215]: time="2025-05-14T00:53:47.910043103Z" level=info msg="TearDown network for sandbox \"988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e\" successfully" May 14 00:53:47.910298 env[1215]: time="2025-05-14T00:53:47.910064702Z" level=info msg="StopPodSandbox for \"988c603c842b5d575e5bb1167bf9277971b209972b4cd34339a0bb1210d8464e\" returns successfully" May 14 00:53:47.918042 systemd[1]: cri-containerd-c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746.scope: Deactivated successfully. May 14 00:53:47.918406 systemd[1]: cri-containerd-c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746.scope: Consumed 6.428s CPU time. May 14 00:53:47.936533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746-rootfs.mount: Deactivated successfully. 
May 14 00:53:47.942930 env[1215]: time="2025-05-14T00:53:47.942863326Z" level=info msg="shim disconnected" id=c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746 May 14 00:53:47.942930 env[1215]: time="2025-05-14T00:53:47.942920644Z" level=warning msg="cleaning up after shim disconnected" id=c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746 namespace=k8s.io May 14 00:53:47.942930 env[1215]: time="2025-05-14T00:53:47.942930124Z" level=info msg="cleaning up dead shim" May 14 00:53:47.949579 env[1215]: time="2025-05-14T00:53:47.949528751Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" May 14 00:53:47.951565 env[1215]: time="2025-05-14T00:53:47.951527035Z" level=info msg="StopContainer for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" returns successfully" May 14 00:53:47.952030 env[1215]: time="2025-05-14T00:53:47.951992577Z" level=info msg="StopPodSandbox for \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\"" May 14 00:53:47.952099 env[1215]: time="2025-05-14T00:53:47.952048775Z" level=info msg="Container to stop \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:53:47.952099 env[1215]: time="2025-05-14T00:53:47.952064134Z" level=info msg="Container to stop \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:53:47.952099 env[1215]: time="2025-05-14T00:53:47.952074294Z" level=info msg="Container to stop \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:53:47.952099 env[1215]: time="2025-05-14T00:53:47.952085653Z" level=info msg="Container to stop \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:53:47.952099 env[1215]: time="2025-05-14T00:53:47.952096213Z" level=info msg="Container to stop \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:53:47.956786 systemd[1]: cri-containerd-45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9.scope: Deactivated successfully. 
May 14 00:53:47.976461 kubelet[1913]: I0514 00:53:47.976362 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nv5k\" (UniqueName: \"kubernetes.io/projected/6f3ac2d5-9c5a-49e1-938e-e58a16060912-kube-api-access-8nv5k\") pod \"6f3ac2d5-9c5a-49e1-938e-e58a16060912\" (UID: \"6f3ac2d5-9c5a-49e1-938e-e58a16060912\") " May 14 00:53:47.976823 kubelet[1913]: I0514 00:53:47.976480 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f3ac2d5-9c5a-49e1-938e-e58a16060912-cilium-config-path\") pod \"6f3ac2d5-9c5a-49e1-938e-e58a16060912\" (UID: \"6f3ac2d5-9c5a-49e1-938e-e58a16060912\") " May 14 00:53:47.979531 kubelet[1913]: I0514 00:53:47.979484 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3ac2d5-9c5a-49e1-938e-e58a16060912-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f3ac2d5-9c5a-49e1-938e-e58a16060912" (UID: "6f3ac2d5-9c5a-49e1-938e-e58a16060912"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:53:47.980541 kubelet[1913]: I0514 00:53:47.980511 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3ac2d5-9c5a-49e1-938e-e58a16060912-kube-api-access-8nv5k" (OuterVolumeSpecName: "kube-api-access-8nv5k") pod "6f3ac2d5-9c5a-49e1-938e-e58a16060912" (UID: "6f3ac2d5-9c5a-49e1-938e-e58a16060912"). InnerVolumeSpecName "kube-api-access-8nv5k". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:53:47.985416 env[1215]: time="2025-05-14T00:53:47.985366779Z" level=info msg="shim disconnected" id=45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9 May 14 00:53:47.985558 env[1215]: time="2025-05-14T00:53:47.985419657Z" level=warning msg="cleaning up after shim disconnected" id=45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9 namespace=k8s.io May 14 00:53:47.985558 env[1215]: time="2025-05-14T00:53:47.985431896Z" level=info msg="cleaning up dead shim" May 14 00:53:47.993147 env[1215]: time="2025-05-14T00:53:47.993105242Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3666 runtime=io.containerd.runc.v2\n" May 14 00:53:47.993453 env[1215]: time="2025-05-14T00:53:47.993423510Z" level=info msg="TearDown network for sandbox \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" successfully" May 14 00:53:47.993493 env[1215]: time="2025-05-14T00:53:47.993453109Z" level=info msg="StopPodSandbox for \"45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9\" returns successfully" May 14 00:53:48.077601 kubelet[1913]: I0514 00:53:48.077493 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cni-path\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.077947 kubelet[1913]: I0514 00:53:48.077925 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-net\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078060 kubelet[1913]: I0514 00:53:48.077578 1913 
operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.078109 kubelet[1913]: I0514 00:53:48.077976 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.078141 kubelet[1913]: I0514 00:53:48.078120 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.078185 kubelet[1913]: I0514 00:53:48.078046 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-etc-cni-netd\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078291 kubelet[1913]: I0514 00:53:48.078277 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2gh7\" (UniqueName: \"kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-kube-api-access-j2gh7\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078381 kubelet[1913]: I0514 00:53:48.078369 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-config-path\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078460 kubelet[1913]: I0514 00:53:48.078448 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-cgroup\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078530 kubelet[1913]: I0514 00:53:48.078518 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-xtables-lock\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078608 kubelet[1913]: I0514 00:53:48.078597 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-hubble-tls\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078677 kubelet[1913]: I0514 00:53:48.078666 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-run\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078743 kubelet[1913]: I0514 00:53:48.078732 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-hostproc\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.078826 kubelet[1913]: I0514 00:53:48.078812 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e075227-ab31-47bf-a1d5-f4ba469d9776-clustermesh-secrets\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.080366 kubelet[1913]: I0514 00:53:48.080332 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-lib-modules\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.080366 kubelet[1913]: I0514 00:53:48.080365 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-kernel\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.080480 kubelet[1913]: I0514 00:53:48.080383 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-bpf-maps\") pod \"8e075227-ab31-47bf-a1d5-f4ba469d9776\" (UID: \"8e075227-ab31-47bf-a1d5-f4ba469d9776\") " May 14 00:53:48.080480 kubelet[1913]: I0514 00:53:48.080430 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f3ac2d5-9c5a-49e1-938e-e58a16060912-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.080480 kubelet[1913]: I0514 00:53:48.080441 1913 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.080480 kubelet[1913]: I0514 00:53:48.080449 1913 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.080480 kubelet[1913]: I0514 00:53:48.080459 1913 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.080480 kubelet[1913]: I0514 00:53:48.080467 1913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nv5k\" (UniqueName: \"kubernetes.io/projected/6f3ac2d5-9c5a-49e1-938e-e58a16060912-kube-api-access-8nv5k\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.080619 kubelet[1913]: I0514 00:53:48.080284 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod 
"8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080619 kubelet[1913]: I0514 00:53:48.080304 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080619 kubelet[1913]: I0514 00:53:48.080489 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080619 kubelet[1913]: I0514 00:53:48.080518 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080619 kubelet[1913]: I0514 00:53:48.080532 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080748 kubelet[1913]: I0514 00:53:48.080548 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080748 kubelet[1913]: I0514 00:53:48.080562 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:48.080984 kubelet[1913]: I0514 00:53:48.080960 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:53:48.081418 kubelet[1913]: I0514 00:53:48.081381 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-kube-api-access-j2gh7" (OuterVolumeSpecName: "kube-api-access-j2gh7") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "kube-api-access-j2gh7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:53:48.082109 kubelet[1913]: I0514 00:53:48.082061 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:53:48.083419 kubelet[1913]: I0514 00:53:48.083378 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e075227-ab31-47bf-a1d5-f4ba469d9776-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e075227-ab31-47bf-a1d5-f4ba469d9776" (UID: "8e075227-ab31-47bf-a1d5-f4ba469d9776"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 00:53:48.181226 kubelet[1913]: I0514 00:53:48.181169 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181226 kubelet[1913]: I0514 00:53:48.181206 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181226 kubelet[1913]: I0514 00:53:48.181218 1913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2gh7\" (UniqueName: \"kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-kube-api-access-j2gh7\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181226 kubelet[1913]: I0514 00:53:48.181227 1913 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181226 kubelet[1913]: I0514 00:53:48.181256 1913 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e075227-ab31-47bf-a1d5-f4ba469d9776-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181500 kubelet[1913]: I0514 00:53:48.181265 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181500 kubelet[1913]: I0514 00:53:48.181273 1913 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181500 kubelet[1913]: I0514 00:53:48.181281 1913 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e075227-ab31-47bf-a1d5-f4ba469d9776-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" 
May 14 00:53:48.181500 kubelet[1913]: I0514 00:53:48.181288 1913 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181500 kubelet[1913]: I0514 00:53:48.181296 1913 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.181500 kubelet[1913]: I0514 00:53:48.181303 1913 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e075227-ab31-47bf-a1d5-f4ba469d9776-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:53:48.518259 systemd[1]: Removed slice kubepods-burstable-pod8e075227_ab31_47bf_a1d5_f4ba469d9776.slice. May 14 00:53:48.518345 systemd[1]: kubepods-burstable-pod8e075227_ab31_47bf_a1d5_f4ba469d9776.slice: Consumed 6.656s CPU time. May 14 00:53:48.519217 systemd[1]: Removed slice kubepods-besteffort-pod6f3ac2d5_9c5a_49e1_938e_e58a16060912.slice. May 14 00:53:48.690903 kubelet[1913]: I0514 00:53:48.690858 1913 scope.go:117] "RemoveContainer" containerID="c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746" May 14 00:53:48.693410 env[1215]: time="2025-05-14T00:53:48.693371036Z" level=info msg="RemoveContainer for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\"" May 14 00:53:48.700636 env[1215]: time="2025-05-14T00:53:48.700302585Z" level=info msg="RemoveContainer for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" returns successfully" May 14 00:53:48.700734 kubelet[1913]: I0514 00:53:48.700554 1913 scope.go:117] "RemoveContainer" containerID="70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516" May 14 00:53:48.701662 env[1215]: time="2025-05-14T00:53:48.701521421Z" level=info msg="RemoveContainer for \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\"" May 14 00:53:48.703636 env[1215]: time="2025-05-14T00:53:48.703598466Z" level=info msg="RemoveContainer for \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\" returns successfully" May 14 00:53:48.704286 kubelet[1913]: I0514 00:53:48.703767 1913 scope.go:117] "RemoveContainer" containerID="523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068" May 14 00:53:48.705087 env[1215]: time="2025-05-14T00:53:48.704836101Z" level=info msg="RemoveContainer for \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\"" May 14 00:53:48.708050 env[1215]: time="2025-05-14T00:53:48.707966508Z" level=info msg="RemoveContainer for \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\" returns successfully" May 14 00:53:48.711816 kubelet[1913]: I0514 00:53:48.711784 1913 scope.go:117] "RemoveContainer" containerID="bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8" May 14 00:53:48.714368 env[1215]: time="2025-05-14T00:53:48.714335637Z" level=info msg="RemoveContainer for \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\"" May 14 00:53:48.716647 env[1215]: time="2025-05-14T00:53:48.716615635Z" level=info msg="RemoveContainer for \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\" returns successfully" May 14 00:53:48.716873 kubelet[1913]: I0514 00:53:48.716840 1913 scope.go:117] "RemoveContainer" containerID="e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1" 
May 14 00:53:48.718208 env[1215]: time="2025-05-14T00:53:48.718180698Z" level=info msg="RemoveContainer for \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\"" May 14 00:53:48.721933 env[1215]: time="2025-05-14T00:53:48.721893043Z" level=info msg="RemoveContainer for \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\" returns successfully" May 14 00:53:48.722071 kubelet[1913]: I0514 00:53:48.722051 1913 scope.go:117] "RemoveContainer" containerID="c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746" May 14 00:53:48.722363 env[1215]: time="2025-05-14T00:53:48.722279469Z" level=error msg="ContainerStatus for \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\": not found" May 14 00:53:48.722484 kubelet[1913]: E0514 00:53:48.722464 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\": not found" containerID="c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746" May 14 00:53:48.722569 kubelet[1913]: I0514 00:53:48.722501 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746"} err="failed to get container status \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\": rpc error: code = NotFound desc = an error occurred when try to find container \"c266efa393bd102729935a1dbaaeca5d2e3793b331c8e3fe2c55d1e19d06c746\": not found" May 14 00:53:48.722607 kubelet[1913]: I0514 00:53:48.722570 1913 scope.go:117] "RemoveContainer" containerID="70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516" May 14 00:53:48.722847 env[1215]: time="2025-05-14T00:53:48.722752692Z" level=error msg="ContainerStatus for \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\": not found" May 14 00:53:48.722922 kubelet[1913]: E0514 00:53:48.722890 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\": not found" containerID="70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516" May 14 00:53:48.722922 kubelet[1913]: I0514 00:53:48.722909 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516"} err="failed to get container status \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\": rpc error: code = NotFound desc = an error occurred when try to find container \"70854290db81085780c905c043540c7b832aff445852701defdfa9ac2f1bd516\": not found" May 14 00:53:48.722922 kubelet[1913]: I0514 00:53:48.722921 1913 scope.go:117] "RemoveContainer" containerID="523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068" May 14 00:53:48.723094 env[1215]: time="2025-05-14T00:53:48.723048842Z" level=error msg="ContainerStatus for \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\": not found" May 14 00:53:48.723189 kubelet[1913]: E0514 00:53:48.723170 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\": not found" containerID="523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068" May 14 00:53:48.723247 kubelet[1913]: I0514 00:53:48.723192 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068"} err="failed to get container status \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\": rpc error: code = NotFound desc = an error occurred when try to find container \"523daec23a205b0b08e727113afd6be2d2126789045d59466cef958c34258068\": not found" May 14 00:53:48.723247 kubelet[1913]: I0514 00:53:48.723205 1913 scope.go:117] "RemoveContainer" containerID="bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8" May 14 00:53:48.723376 env[1215]: time="2025-05-14T00:53:48.723333031Z" level=error msg="ContainerStatus for \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\": not found" May 14 00:53:48.723467 kubelet[1913]: E0514 00:53:48.723449 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\": not found" containerID="bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8" May 14 00:53:48.723505 kubelet[1913]: I0514 00:53:48.723470 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8"} err="failed to get container status \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"bafe2c04a9121ccd8e058d69c4f60ee6b36d8687196f5c012fed6ab4592103f8\": not found" May 14 00:53:48.723505 kubelet[1913]: I0514 00:53:48.723482 1913 scope.go:117] "RemoveContainer" containerID="e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1" May 14 00:53:48.723804 env[1215]: time="2025-05-14T00:53:48.723756576Z" level=error msg="ContainerStatus for \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\": not found" May 14 00:53:48.723934 kubelet[1913]: E0514 00:53:48.723913 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\": not found" containerID="e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1" May 14 00:53:48.723975 kubelet[1913]: I0514 00:53:48.723937 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1"} err="failed to get 
container status \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e072a64a7da97f9d4893165293116a85e6dde27545cf5ef6d0f356aef0c453f1\": not found" May 14 00:53:48.723975 kubelet[1913]: I0514 00:53:48.723954 1913 scope.go:117] "RemoveContainer" containerID="e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897" May 14 00:53:48.727711 env[1215]: time="2025-05-14T00:53:48.727670794Z" level=info msg="RemoveContainer for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\"" May 14 00:53:48.730148 env[1215]: time="2025-05-14T00:53:48.730113906Z" level=info msg="RemoveContainer for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" returns successfully" May 14 00:53:48.730427 kubelet[1913]: I0514 00:53:48.730405 1913 scope.go:117] "RemoveContainer" containerID="e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897" May 14 00:53:48.730649 env[1215]: time="2025-05-14T00:53:48.730592369Z" level=error msg="ContainerStatus for \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\": not found" May 14 00:53:48.730746 kubelet[1913]: E0514 00:53:48.730727 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\": not found" containerID="e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897" May 14 00:53:48.730781 kubelet[1913]: I0514 00:53:48.730752 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897"} err="failed to get container status \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\": rpc error: code = NotFound desc = an error occurred when try to find container \"e03a9c330b9d26e33baf7c0938e55c3091856a742364c581f1c711eb0abff897\": not found" May 14 00:53:48.817171 systemd[1]: var-lib-kubelet-pods-6f3ac2d5\x2d9c5a\x2d49e1\x2d938e\x2de58a16060912-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8nv5k.mount: Deactivated successfully. May 14 00:53:48.817292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9-rootfs.mount: Deactivated successfully. May 14 00:53:48.817343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45a1aa091ac1c1102d7bec0c9b115bc6e50bfabd16a7a59095689d66c091f5e9-shm.mount: Deactivated successfully. May 14 00:53:48.817409 systemd[1]: var-lib-kubelet-pods-8e075227\x2dab31\x2d47bf\x2da1d5\x2df4ba469d9776-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj2gh7.mount: Deactivated successfully. May 14 00:53:48.817459 systemd[1]: var-lib-kubelet-pods-8e075227\x2dab31\x2d47bf\x2da1d5\x2df4ba469d9776-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:53:48.817506 systemd[1]: var-lib-kubelet-pods-8e075227\x2dab31\x2d47bf\x2da1d5\x2df4ba469d9776-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 00:53:49.766658 sshd[3520]: pam_unix(sshd:session): session closed for user core May 14 00:53:49.769976 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:50682.service: Deactivated successfully. May 14 00:53:49.770544 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:53:49.770716 systemd[1]: session-22.scope: Consumed 1.063s CPU time. May 14 00:53:49.771071 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit. May 14 00:53:49.772182 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:50698.service. May 14 00:53:49.772932 systemd-logind[1203]: Removed session 22. May 14 00:53:49.815031 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 50698 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:49.816261 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:49.819740 systemd-logind[1203]: New session 23 of user core. May 14 00:53:49.820645 systemd[1]: Started session-23.scope. May 14 00:53:50.510181 kubelet[1913]: E0514 00:53:50.510143 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:50.512589 kubelet[1913]: I0514 00:53:50.512542 1913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3ac2d5-9c5a-49e1-938e-e58a16060912" path="/var/lib/kubelet/pods/6f3ac2d5-9c5a-49e1-938e-e58a16060912/volumes" May 14 00:53:50.513128 kubelet[1913]: I0514 00:53:50.513101 1913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e075227-ab31-47bf-a1d5-f4ba469d9776" path="/var/lib/kubelet/pods/8e075227-ab31-47bf-a1d5-f4ba469d9776/volumes" May 14 00:53:50.649869 sshd[3684]: pam_unix(sshd:session): session closed for user core May 14 00:53:50.653884 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:50708.service. May 14 00:53:50.654399 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:50698.service: Deactivated successfully. May 14 00:53:50.656568 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:53:50.657498 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit. May 14 00:53:50.658539 systemd-logind[1203]: Removed session 23. May 14 00:53:50.662944 kubelet[1913]: I0514 00:53:50.662905 1913 memory_manager.go:355] "RemoveStaleState removing state" podUID="8e075227-ab31-47bf-a1d5-f4ba469d9776" containerName="cilium-agent" May 14 00:53:50.663064 kubelet[1913]: I0514 00:53:50.663052 1913 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f3ac2d5-9c5a-49e1-938e-e58a16060912" containerName="cilium-operator" May 14 00:53:50.673136 systemd[1]: Created slice kubepods-burstable-podc6cbcf15_4d3e_4741_a295_ccc51d88b6bf.slice. May 14 00:53:50.705996 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 50708 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:50.707631 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:50.711096 systemd-logind[1203]: New session 24 of user core. May 14 00:53:50.711921 systemd[1]: Started session-24.scope. 
May 14 00:53:50.796964 kubelet[1913]: I0514 00:53:50.796849 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cni-path\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.796964 kubelet[1913]: I0514 00:53:50.796892 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-cgroup\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.796964 kubelet[1913]: I0514 00:53:50.796912 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-clustermesh-secrets\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.796964 kubelet[1913]: I0514 00:53:50.796931 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-net\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.796964 kubelet[1913]: I0514 00:53:50.796959 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-kernel\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797192 kubelet[1913]: I0514 00:53:50.796977 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-xtables-lock\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797192 kubelet[1913]: I0514 00:53:50.796992 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-config-path\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797192 kubelet[1913]: I0514 00:53:50.797011 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-ipsec-secrets\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797192 kubelet[1913]: I0514 00:53:50.797028 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rznm\" (UniqueName: \"kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-kube-api-access-8rznm\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797192 kubelet[1913]: I0514 00:53:50.797044 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-lib-modules\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797192 kubelet[1913]: I0514 00:53:50.797058 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hubble-tls\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797360 kubelet[1913]: I0514 00:53:50.797075 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-run\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797360 kubelet[1913]: I0514 00:53:50.797111 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-bpf-maps\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797360 kubelet[1913]: I0514 00:53:50.797126 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hostproc\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.797360 kubelet[1913]: I0514 00:53:50.797142 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-etc-cni-netd\") pod \"cilium-q62z4\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " pod="kube-system/cilium-q62z4" May 14 00:53:50.835458 sshd[3695]: pam_unix(sshd:session): session closed for user core May 14 00:53:50.838398 systemd[1]: Started sshd@24-10.0.0.131:22-10.0.0.1:50722.service. May 14 00:53:50.839053 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:53:50.839644 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:50708.service: Deactivated successfully. May 14 00:53:50.841389 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit. May 14 00:53:50.842253 systemd-logind[1203]: Removed session 24. May 14 00:53:50.844523 kubelet[1913]: E0514 00:53:50.844462 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-8rznm lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-q62z4" podUID="c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" May 14 00:53:50.881474 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 50722 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:50.882835 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:50.886380 systemd-logind[1203]: New session 25 of user core. May 14 00:53:50.887207 systemd[1]: Started session-25.scope. 
May 14 00:53:51.803648 kubelet[1913]: I0514 00:53:51.803606 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-config-path\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.803648 kubelet[1913]: I0514 00:53:51.803648 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hubble-tls\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804046 kubelet[1913]: I0514 00:53:51.803677 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cni-path\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804046 kubelet[1913]: I0514 00:53:51.803698 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hostproc\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804046 kubelet[1913]: I0514 00:53:51.803717 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-clustermesh-secrets\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804046 kubelet[1913]: I0514 00:53:51.803735 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-net\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804046 kubelet[1913]: I0514 00:53:51.803754 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-ipsec-secrets\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804046 kubelet[1913]: I0514 00:53:51.803778 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rznm\" (UniqueName: \"kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-kube-api-access-8rznm\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804191 kubelet[1913]: I0514 00:53:51.803799 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-cgroup\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804191 kubelet[1913]: I0514 00:53:51.803813 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-kernel\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804191 
kubelet[1913]: I0514 00:53:51.803832 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-etc-cni-netd\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804191 kubelet[1913]: I0514 00:53:51.803847 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-run\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804191 kubelet[1913]: I0514 00:53:51.803861 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-bpf-maps\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804191 kubelet[1913]: I0514 00:53:51.803877 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-xtables-lock\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804345 kubelet[1913]: I0514 00:53:51.803891 1913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-lib-modules\") pod \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\" (UID: \"c6cbcf15-4d3e-4741-a295-ccc51d88b6bf\") " May 14 00:53:51.804345 kubelet[1913]: I0514 00:53:51.803938 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804345 kubelet[1913]: I0514 00:53:51.803964 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804345 kubelet[1913]: I0514 00:53:51.803989 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804345 kubelet[1913]: I0514 00:53:51.804076 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804503 kubelet[1913]: I0514 00:53:51.804115 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804723 kubelet[1913]: I0514 00:53:51.804691 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804802 kubelet[1913]: I0514 00:53:51.804735 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804802 kubelet[1913]: I0514 00:53:51.804754 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804802 kubelet[1913]: I0514 00:53:51.804770 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.804802 kubelet[1913]: I0514 00:53:51.804797 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:53:51.806764 kubelet[1913]: I0514 00:53:51.806722 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 00:53:51.806764 kubelet[1913]: I0514 00:53:51.806732 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:53:51.809156 kubelet[1913]: I0514 00:53:51.809124 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 00:53:51.809210 systemd[1]: var-lib-kubelet-pods-c6cbcf15\x2d4d3e\x2d4741\x2da295\x2dccc51d88b6bf-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 14 00:53:51.809327 systemd[1]: var-lib-kubelet-pods-c6cbcf15\x2d4d3e\x2d4741\x2da295\x2dccc51d88b6bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:53:51.809418 kubelet[1913]: I0514 00:53:51.809223 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-kube-api-access-8rznm" (OuterVolumeSpecName: "kube-api-access-8rznm") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "kube-api-access-8rznm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:53:51.809382 systemd[1]: var-lib-kubelet-pods-c6cbcf15\x2d4d3e\x2d4741\x2da295\x2dccc51d88b6bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:53:51.809951 kubelet[1913]: I0514 00:53:51.809927 1913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" (UID: "c6cbcf15-4d3e-4741-a295-ccc51d88b6bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:53:51.811542 systemd[1]: var-lib-kubelet-pods-c6cbcf15\x2d4d3e\x2d4741\x2da295\x2dccc51d88b6bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8rznm.mount: Deactivated successfully. 
May 14 00:53:51.904379 kubelet[1913]: I0514 00:53:51.904339 1913 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904379 kubelet[1913]: I0514 00:53:51.904372 1913 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904379 kubelet[1913]: I0514 00:53:51.904385 1913 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904394 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904403 1913 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904410 1913 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904417 1913 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904425 1913 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904433 1913 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904441 1913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8rznm\" (UniqueName: \"kubernetes.io/projected/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-kube-api-access-8rznm\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904517 kubelet[1913]: I0514 00:53:51.904449 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904707 kubelet[1913]: I0514 00:53:51.904457 1913 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904707 kubelet[1913]: I0514 00:53:51.904465 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904707 kubelet[1913]: I0514 
00:53:51.904472 1913 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:53:51.904707 kubelet[1913]: I0514 00:53:51.904480 1913 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:53:52.515746 systemd[1]: Removed slice kubepods-burstable-podc6cbcf15_4d3e_4741_a295_ccc51d88b6bf.slice. May 14 00:53:52.555590 kubelet[1913]: E0514 00:53:52.555536 1913 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:53:52.764889 systemd[1]: Created slice kubepods-burstable-pod304b2b40_a84a_499a_ac88_647372ac1688.slice. May 14 00:53:52.809045 kubelet[1913]: I0514 00:53:52.808916 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-cilium-run\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809045 kubelet[1913]: I0514 00:53:52.808967 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-xtables-lock\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809045 kubelet[1913]: I0514 00:53:52.808985 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/304b2b40-a84a-499a-ac88-647372ac1688-clustermesh-secrets\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809045 kubelet[1913]: I0514 00:53:52.809008 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-etc-cni-netd\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809045 kubelet[1913]: I0514 00:53:52.809044 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-host-proc-sys-net\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809585 kubelet[1913]: I0514 00:53:52.809076 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wjd\" (UniqueName: \"kubernetes.io/projected/304b2b40-a84a-499a-ac88-647372ac1688-kube-api-access-52wjd\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809585 kubelet[1913]: I0514 00:53:52.809101 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-cilium-cgroup\") pod \"cilium-2wrjc\" (UID: 
\"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809585 kubelet[1913]: I0514 00:53:52.809145 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/304b2b40-a84a-499a-ac88-647372ac1688-cilium-config-path\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809585 kubelet[1913]: I0514 00:53:52.809185 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/304b2b40-a84a-499a-ac88-647372ac1688-hubble-tls\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809585 kubelet[1913]: I0514 00:53:52.809211 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-hostproc\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809585 kubelet[1913]: I0514 00:53:52.809245 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/304b2b40-a84a-499a-ac88-647372ac1688-cilium-ipsec-secrets\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809731 kubelet[1913]: I0514 00:53:52.809271 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-bpf-maps\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809731 kubelet[1913]: I0514 00:53:52.809289 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-lib-modules\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809731 kubelet[1913]: I0514 00:53:52.809316 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-host-proc-sys-kernel\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:52.809731 kubelet[1913]: I0514 00:53:52.809331 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/304b2b40-a84a-499a-ac88-647372ac1688-cni-path\") pod \"cilium-2wrjc\" (UID: \"304b2b40-a84a-499a-ac88-647372ac1688\") " pod="kube-system/cilium-2wrjc" May 14 00:53:53.067809 kubelet[1913]: E0514 00:53:53.067694 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:53.068553 env[1215]: time="2025-05-14T00:53:53.068514403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wrjc,Uid:304b2b40-a84a-499a-ac88-647372ac1688,Namespace:kube-system,Attempt:0,}" May 14 00:53:53.079514 env[1215]: 
time="2025-05-14T00:53:53.079450591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:53:53.079514 env[1215]: time="2025-05-14T00:53:53.079492190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:53:53.079514 env[1215]: time="2025-05-14T00:53:53.079503070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:53:53.079667 env[1215]: time="2025-05-14T00:53:53.079617867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856 pid=3741 runtime=io.containerd.runc.v2 May 14 00:53:53.096948 systemd[1]: Started cri-containerd-788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856.scope. May 14 00:53:53.138754 env[1215]: time="2025-05-14T00:53:53.138692011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wrjc,Uid:304b2b40-a84a-499a-ac88-647372ac1688,Namespace:kube-system,Attempt:0,} returns sandbox id \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\"" May 14 00:53:53.139325 kubelet[1913]: E0514 00:53:53.139297 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:53.142293 env[1215]: time="2025-05-14T00:53:53.142212797Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:53:53.151914 env[1215]: time="2025-05-14T00:53:53.151868099Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3\"" May 14 00:53:53.152588 env[1215]: time="2025-05-14T00:53:53.152557560Z" level=info msg="StartContainer for \"9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3\"" May 14 00:53:53.165844 systemd[1]: Started cri-containerd-9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3.scope. May 14 00:53:53.201221 env[1215]: time="2025-05-14T00:53:53.200526160Z" level=info msg="StartContainer for \"9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3\" returns successfully" May 14 00:53:53.205107 systemd[1]: cri-containerd-9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3.scope: Deactivated successfully. 
May 14 00:53:53.239937 env[1215]: time="2025-05-14T00:53:53.239883030Z" level=info msg="shim disconnected" id=9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3 May 14 00:53:53.239937 env[1215]: time="2025-05-14T00:53:53.239932349Z" level=warning msg="cleaning up after shim disconnected" id=9212712b9846986708a855de14019c47d6e002e01eca5475495f711680588bd3 namespace=k8s.io May 14 00:53:53.239937 env[1215]: time="2025-05-14T00:53:53.239942669Z" level=info msg="cleaning up dead shim" May 14 00:53:53.246494 env[1215]: time="2025-05-14T00:53:53.246459415Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3827 runtime=io.containerd.runc.v2\n" May 14 00:53:53.707303 kubelet[1913]: E0514 00:53:53.707275 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:53.708983 env[1215]: time="2025-05-14T00:53:53.708948793Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:53:53.719248 env[1215]: time="2025-05-14T00:53:53.719191679Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517\"" May 14 00:53:53.719861 env[1215]: time="2025-05-14T00:53:53.719832822Z" level=info msg="StartContainer for \"24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517\"" May 14 00:53:53.734026 systemd[1]: Started cri-containerd-24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517.scope. May 14 00:53:53.763428 env[1215]: time="2025-05-14T00:53:53.763388380Z" level=info msg="StartContainer for \"24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517\" returns successfully" May 14 00:53:53.767285 systemd[1]: cri-containerd-24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517.scope: Deactivated successfully. 
May 14 00:53:53.785716 env[1215]: time="2025-05-14T00:53:53.785675465Z" level=info msg="shim disconnected" id=24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517 May 14 00:53:53.785865 env[1215]: time="2025-05-14T00:53:53.785721184Z" level=warning msg="cleaning up after shim disconnected" id=24d738ac0ff31ce50d0eb42bd79c68fb0154edeb512a5db1c29e77b669a08517 namespace=k8s.io May 14 00:53:53.785865 env[1215]: time="2025-05-14T00:53:53.785739423Z" level=info msg="cleaning up dead shim" May 14 00:53:53.791549 env[1215]: time="2025-05-14T00:53:53.791519029Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" May 14 00:53:54.510565 kubelet[1913]: E0514 00:53:54.510531 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:54.512653 kubelet[1913]: I0514 00:53:54.512625 1913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6cbcf15-4d3e-4741-a295-ccc51d88b6bf" path="/var/lib/kubelet/pods/c6cbcf15-4d3e-4741-a295-ccc51d88b6bf/volumes" May 14 00:53:54.515933 kubelet[1913]: I0514 00:53:54.515899 1913 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:53:54Z","lastTransitionTime":"2025-05-14T00:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 00:53:54.710318 kubelet[1913]: E0514 00:53:54.710285 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:54.712900 env[1215]: time="2025-05-14T00:53:54.712860752Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:53:54.727953 env[1215]: time="2025-05-14T00:53:54.727912696Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080\"" May 14 00:53:54.728679 env[1215]: time="2025-05-14T00:53:54.728630798Z" level=info msg="StartContainer for \"875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080\"" May 14 00:53:54.749209 systemd[1]: Started cri-containerd-875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080.scope. May 14 00:53:54.780834 env[1215]: time="2025-05-14T00:53:54.780720938Z" level=info msg="StartContainer for \"875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080\" returns successfully" May 14 00:53:54.781567 systemd[1]: cri-containerd-875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080.scope: Deactivated successfully. 
May 14 00:53:54.801977 env[1215]: time="2025-05-14T00:53:54.801935928Z" level=info msg="shim disconnected" id=875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080 May 14 00:53:54.801977 env[1215]: time="2025-05-14T00:53:54.801974008Z" level=warning msg="cleaning up after shim disconnected" id=875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080 namespace=k8s.io May 14 00:53:54.802153 env[1215]: time="2025-05-14T00:53:54.801984287Z" level=info msg="cleaning up dead shim" May 14 00:53:54.808779 env[1215]: time="2025-05-14T00:53:54.808724839Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3947 runtime=io.containerd.runc.v2\n" May 14 00:53:54.914779 systemd[1]: run-containerd-runc-k8s.io-875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080-runc.3dLvLH.mount: Deactivated successfully. May 14 00:53:54.914874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-875e4e336e4a597e29b33e664db9a363495462766427398b700938ea44d8e080-rootfs.mount: Deactivated successfully. May 14 00:53:55.713805 kubelet[1913]: E0514 00:53:55.713764 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:55.715850 env[1215]: time="2025-05-14T00:53:55.715809877Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:53:55.725755 env[1215]: time="2025-05-14T00:53:55.725708207Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1\"" May 14 00:53:55.726389 env[1215]: time="2025-05-14T00:53:55.726364392Z" level=info msg="StartContainer for \"942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1\"" May 14 00:53:55.741218 systemd[1]: Started cri-containerd-942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1.scope. May 14 00:53:55.769461 systemd[1]: cri-containerd-942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1.scope: Deactivated successfully. 
May 14 00:53:55.770743 env[1215]: time="2025-05-14T00:53:55.770666080Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304b2b40_a84a_499a_ac88_647372ac1688.slice/cri-containerd-942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1.scope/memory.events\": no such file or directory" May 14 00:53:55.772085 env[1215]: time="2025-05-14T00:53:55.772050488Z" level=info msg="StartContainer for \"942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1\" returns successfully" May 14 00:53:55.790087 env[1215]: time="2025-05-14T00:53:55.790041989Z" level=info msg="shim disconnected" id=942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1 May 14 00:53:55.790087 env[1215]: time="2025-05-14T00:53:55.790088828Z" level=warning msg="cleaning up after shim disconnected" id=942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1 namespace=k8s.io May 14 00:53:55.790340 env[1215]: time="2025-05-14T00:53:55.790098108Z" level=info msg="cleaning up dead shim" May 14 00:53:55.797067 env[1215]: time="2025-05-14T00:53:55.797032026Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4002 runtime=io.containerd.runc.v2\n" May 14 00:53:55.914783 systemd[1]: run-containerd-runc-k8s.io-942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1-runc.6mdsLK.mount: Deactivated successfully. May 14 00:53:55.914877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942cd4f496878d5b7f8c2199f6d6f10dd95fcb47d98755461e9bb90b2589a7b1-rootfs.mount: Deactivated successfully. May 14 00:53:56.722116 kubelet[1913]: E0514 00:53:56.722071 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:56.723824 env[1215]: time="2025-05-14T00:53:56.723787061Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:53:56.740654 env[1215]: time="2025-05-14T00:53:56.740603817Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540\"" May 14 00:53:56.741602 env[1215]: time="2025-05-14T00:53:56.741110526Z" level=info msg="StartContainer for \"d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540\"" May 14 00:53:56.758314 systemd[1]: Started cri-containerd-d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540.scope. May 14 00:53:56.786082 env[1215]: time="2025-05-14T00:53:56.786040473Z" level=info msg="StartContainer for \"d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540\" returns successfully" May 14 00:53:56.847130 systemd[1]: Created slice system-systemd\x2dcoredump.slice. May 14 00:53:56.850605 systemd[1]: Started systemd-coredump@0-4082-0.service. May 14 00:53:56.972545 systemd-coredump[4087]: elfutils disabled, parsing ELF objects not supported May 14 00:53:56.972596 systemd-coredump[4087]: Process 4071 (cilium-envoy) of user 0 dumped core. May 14 00:53:56.977649 systemd[1]: systemd-coredump@0-4082-0.service: Deactivated successfully. 
May 14 00:53:57.725627 kubelet[1913]: E0514 00:53:57.725593 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:57.740950 kubelet[1913]: I0514 00:53:57.740590 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wrjc" podStartSLOduration=5.740575719 podStartE2EDuration="5.740575719s" podCreationTimestamp="2025-05-14 00:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:53:57.739832934 +0000 UTC m=+85.329626071" watchObservedRunningTime="2025-05-14 00:53:57.740575719 +0000 UTC m=+85.330368856" May 14 00:53:58.979772 systemd[1]: cri-containerd-d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540.scope: Deactivated successfully. May 14 00:53:58.994697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540-rootfs.mount: Deactivated successfully. May 14 00:53:59.018801 env[1215]: time="2025-05-14T00:53:59.018754697Z" level=info msg="shim disconnected" id=d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540 May 14 00:53:59.018801 env[1215]: time="2025-05-14T00:53:59.018802976Z" level=warning msg="cleaning up after shim disconnected" id=d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540 namespace=k8s.io May 14 00:53:59.019259 env[1215]: time="2025-05-14T00:53:59.018811816Z" level=info msg="cleaning up dead shim" May 14 00:53:59.025289 env[1215]: time="2025-05-14T00:53:59.025254506Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:53:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" May 14 00:53:59.068912 kubelet[1913]: E0514 00:53:59.068814 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:59.737195 kubelet[1913]: I0514 00:53:59.737165 1913 scope.go:117] "RemoveContainer" containerID="d9dab364f1d14dd85686701c8004c1de193b1836b72d1407f9b945605c5e0540" May 14 00:53:59.737466 kubelet[1913]: E0514 00:53:59.737452 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:53:59.739624 env[1215]: time="2025-05-14T00:53:59.739566457Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for container &ContainerMetadata{Name:cilium-agent,Attempt:1,}" May 14 00:53:59.760939 env[1215]: time="2025-05-14T00:53:59.760880333Z" level=info msg="CreateContainer within sandbox \"788c100d03297b32248ff95fe9faec5a88e047567e892909215257d87578f856\" for &ContainerMetadata{Name:cilium-agent,Attempt:1,} returns container id \"a251bb124b0fb98ba56da6326e30207e0a92f7ecf2d5fe4e354342abb538f164\"" May 14 00:53:59.761486 env[1215]: time="2025-05-14T00:53:59.761453643Z" level=info msg="StartContainer for \"a251bb124b0fb98ba56da6326e30207e0a92f7ecf2d5fe4e354342abb538f164\"" May 14 00:53:59.794946 systemd[1]: Started cri-containerd-a251bb124b0fb98ba56da6326e30207e0a92f7ecf2d5fe4e354342abb538f164.scope. 
May 14 00:53:59.827103 env[1215]: time="2025-05-14T00:53:59.827055882Z" level=info msg="StartContainer for \"a251bb124b0fb98ba56da6326e30207e0a92f7ecf2d5fe4e354342abb538f164\" returns successfully" May 14 00:53:59.994765 systemd[1]: run-containerd-runc-k8s.io-a251bb124b0fb98ba56da6326e30207e0a92f7ecf2d5fe4e354342abb538f164-runc.LCsLJD.mount: Deactivated successfully. May 14 00:54:00.065257 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 14 00:54:00.743001 kubelet[1913]: E0514 00:54:00.742973 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:54:01.744775 kubelet[1913]: E0514 00:54:01.744733 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:54:02.871388 systemd-networkd[1042]: lxc_health: Link UP May 14 00:54:02.875484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:54:02.875349 systemd-networkd[1042]: lxc_health: Gained carrier May 14 00:54:03.069562 kubelet[1913]: E0514 00:54:03.069523 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:54:03.545664 systemd[1]: run-containerd-runc-k8s.io-a251bb124b0fb98ba56da6326e30207e0a92f7ecf2d5fe4e354342abb538f164-runc.GDR2uD.mount: Deactivated successfully. May 14 00:54:03.748789 kubelet[1913]: E0514 00:54:03.748763 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:54:04.347416 systemd-networkd[1042]: lxc_health: Gained IPv6LL May 14 00:54:04.750769 kubelet[1913]: E0514 00:54:04.750735 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:54:09.918394 sshd[3709]: pam_unix(sshd:session): session closed for user core May 14 00:54:09.921320 systemd[1]: sshd@24-10.0.0.131:22-10.0.0.1:50722.service: Deactivated successfully. May 14 00:54:09.922116 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:54:09.922615 systemd-logind[1203]: Session 25 logged out. Waiting for processes to exit. May 14 00:54:09.923494 systemd-logind[1203]: Removed session 25. May 14 00:54:11.511007 kubelet[1913]: E0514 00:54:11.510966 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"