May 13 00:35:42.730666 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:35:42.730684 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon May 12 23:22:00 -00 2025
May 13 00:35:42.730692 kernel: efi: EFI v2.70 by EDK II
May 13 00:35:42.730698 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 13 00:35:42.730703 kernel: random: crng init done
May 13 00:35:42.730708 kernel: ACPI: Early table checksum verification disabled
May 13 00:35:42.730714 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 13 00:35:42.730721 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:35:42.730727 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730732 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730738 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730743 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730748 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730754 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730762 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730768 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730774 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:42.730779 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:35:42.730785 kernel: NUMA: Failed to initialise from firmware
May 13 00:35:42.730791 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:35:42.730797 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 13 00:35:42.730803 kernel: Zone ranges:
May 13 00:35:42.730808 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:35:42.730815 kernel: DMA32 empty
May 13 00:35:42.730821 kernel: Normal empty
May 13 00:35:42.730826 kernel: Movable zone start for each node
May 13 00:35:42.730833 kernel: Early memory node ranges
May 13 00:35:42.730838 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 13 00:35:42.730845 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 13 00:35:42.730852 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 13 00:35:42.730858 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 13 00:35:42.730863 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 13 00:35:42.730869 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 13 00:35:42.730875 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 13 00:35:42.730881 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:35:42.730888 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:35:42.730893 kernel: psci: probing for conduit method from ACPI.
May 13 00:35:42.730899 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:35:42.730905 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:35:42.730911 kernel: psci: Trusted OS migration not required
May 13 00:35:42.730919 kernel: psci: SMC Calling Convention v1.1
May 13 00:35:42.730925 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:35:42.730933 kernel: ACPI: SRAT not present
May 13 00:35:42.730939 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 13 00:35:42.730945 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 13 00:35:42.730952 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:35:42.730958 kernel: Detected PIPT I-cache on CPU0
May 13 00:35:42.730964 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:35:42.730970 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:35:42.730976 kernel: CPU features: detected: Spectre-v4
May 13 00:35:42.730982 kernel: CPU features: detected: Spectre-BHB
May 13 00:35:42.730989 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:35:42.730995 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:35:42.731001 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:35:42.731008 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:35:42.731014 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:35:42.731020 kernel: Policy zone: DMA
May 13 00:35:42.731027 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:35:42.731034 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:35:42.731040 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:35:42.731046 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:35:42.731052 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:35:42.731060 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 13 00:35:42.731066 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:35:42.731072 kernel: trace event string verifier disabled
May 13 00:35:42.731078 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:35:42.731085 kernel: rcu: RCU event tracing is enabled.
May 13 00:35:42.731092 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:35:42.731099 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:35:42.731105 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:35:42.731111 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:35:42.731118 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:35:42.731128 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:35:42.731142 kernel: GICv3: 256 SPIs implemented
May 13 00:35:42.731152 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:35:42.731160 kernel: GICv3: Distributor has no Range Selector support
May 13 00:35:42.731166 kernel: Root IRQ handler: gic_handle_irq
May 13 00:35:42.731172 kernel: GICv3: 16 PPIs implemented
May 13 00:35:42.731178 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:35:42.731184 kernel: ACPI: SRAT not present
May 13 00:35:42.731190 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:35:42.731196 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:35:42.731203 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:35:42.731209 kernel: GICv3: using LPI property table @0x00000000400d0000
May 13 00:35:42.731215 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 13 00:35:42.731225 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:42.731235 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:35:42.731243 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:35:42.731249 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:35:42.731255 kernel: arm-pv: using stolen time PV
May 13 00:35:42.731262 kernel: Console: colour dummy device 80x25
May 13 00:35:42.731268 kernel: ACPI: Core revision 20210730
May 13 00:35:42.731275 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:35:42.731281 kernel: pid_max: default: 32768 minimum: 301
May 13 00:35:42.731287 kernel: LSM: Security Framework initializing
May 13 00:35:42.731295 kernel: SELinux: Initializing.
May 13 00:35:42.731301 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:35:42.731308 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:35:42.731314 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:35:42.731320 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:35:42.731327 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:35:42.731333 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:35:42.731339 kernel: Remapping and enabling EFI services.
May 13 00:35:42.731345 kernel: smp: Bringing up secondary CPUs ...
May 13 00:35:42.731353 kernel: Detected PIPT I-cache on CPU1
May 13 00:35:42.731359 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:35:42.731366 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 13 00:35:42.731372 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:42.731378 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:35:42.731385 kernel: Detected PIPT I-cache on CPU2
May 13 00:35:42.731391 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:35:42.731427 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 13 00:35:42.731434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:42.731441 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:35:42.731449 kernel: Detected PIPT I-cache on CPU3
May 13 00:35:42.731455 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:35:42.731461 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 13 00:35:42.731468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:42.731478 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:35:42.731486 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:35:42.731493 kernel: SMP: Total of 4 processors activated.
May 13 00:35:42.731500 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:35:42.731506 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:35:42.731513 kernel: CPU features: detected: Common not Private translations
May 13 00:35:42.731520 kernel: CPU features: detected: CRC32 instructions
May 13 00:35:42.731526 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:35:42.731534 kernel: CPU features: detected: LSE atomic instructions
May 13 00:35:42.731541 kernel: CPU features: detected: Privileged Access Never
May 13 00:35:42.731548 kernel: CPU features: detected: RAS Extension Support
May 13 00:35:42.731554 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:35:42.731561 kernel: CPU: All CPU(s) started at EL1
May 13 00:35:42.731569 kernel: alternatives: patching kernel code
May 13 00:35:42.731576 kernel: devtmpfs: initialized
May 13 00:35:42.731582 kernel: KASLR enabled
May 13 00:35:42.731589 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:35:42.731596 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:35:42.731602 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:35:42.731609 kernel: SMBIOS 3.0.0 present.
May 13 00:35:42.731615 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 13 00:35:42.731622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:35:42.731630 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:35:42.731637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:35:42.731644 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:35:42.731650 kernel: audit: initializing netlink subsys (disabled)
May 13 00:35:42.731657 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 13 00:35:42.731664 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:35:42.731670 kernel: cpuidle: using governor menu
May 13 00:35:42.731677 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:35:42.731684 kernel: ASID allocator initialised with 32768 entries
May 13 00:35:42.731692 kernel: ACPI: bus type PCI registered
May 13 00:35:42.731698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:35:42.731705 kernel: Serial: AMBA PL011 UART driver
May 13 00:35:42.731715 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:35:42.731723 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:35:42.731729 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:35:42.731736 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:35:42.731743 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:35:42.731750 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:35:42.731758 kernel: ACPI: Added _OSI(Module Device)
May 13 00:35:42.731765 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:35:42.731771 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:35:42.731778 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:35:42.731785 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:35:42.731791 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:35:42.731798 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:35:42.731805 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:35:42.731811 kernel: ACPI: Interpreter enabled
May 13 00:35:42.731819 kernel: ACPI: Using GIC for interrupt routing
May 13 00:35:42.731826 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:35:42.731832 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:35:42.731839 kernel: printk: console [ttyAMA0] enabled
May 13 00:35:42.731845 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:35:42.731974 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:35:42.732076 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:35:42.732146 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:35:42.732206 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:35:42.732268 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:35:42.732276 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:35:42.732283 kernel: PCI host bridge to bus 0000:00
May 13 00:35:42.732355 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:35:42.732450 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:35:42.732507 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:35:42.732562 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:35:42.732636 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:35:42.732705 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:35:42.732767 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:35:42.732828 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:35:42.732890 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:35:42.732953 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:35:42.733015 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:35:42.733076 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:35:42.733131 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:35:42.733184 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:35:42.733240 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:35:42.733249 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:35:42.733255 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:35:42.733264 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:35:42.733271 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:35:42.733277 kernel: iommu: Default domain type: Translated
May 13 00:35:42.733284 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:35:42.733291 kernel: vgaarb: loaded
May 13 00:35:42.733297 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:35:42.733304 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 13 00:35:42.733310 kernel: PTP clock support registered
May 13 00:35:42.733317 kernel: Registered efivars operations
May 13 00:35:42.733325 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:35:42.733332 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:35:42.733339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:35:42.733345 kernel: pnp: PnP ACPI init
May 13 00:35:42.733437 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:35:42.733448 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:35:42.733455 kernel: NET: Registered PF_INET protocol family
May 13 00:35:42.733462 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:35:42.733471 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:35:42.733479 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:35:42.733486 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:35:42.733493 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:35:42.733499 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:35:42.733506 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:35:42.733513 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:35:42.733520 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:35:42.733526 kernel: PCI: CLS 0 bytes, default 64
May 13 00:35:42.733534 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:35:42.733541 kernel: kvm [1]: HYP mode not available
May 13 00:35:42.733547 kernel: Initialise system trusted keyrings
May 13 00:35:42.733554 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:35:42.733561 kernel: Key type asymmetric registered
May 13 00:35:42.733567 kernel: Asymmetric key parser 'x509' registered
May 13 00:35:42.733574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:35:42.733581 kernel: io scheduler mq-deadline registered
May 13 00:35:42.733587 kernel: io scheduler kyber registered
May 13 00:35:42.733595 kernel: io scheduler bfq registered
May 13 00:35:42.733602 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:35:42.733608 kernel: ACPI: button: Power Button [PWRB]
May 13 00:35:42.733616 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:35:42.733679 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:35:42.733688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:35:42.733695 kernel: thunder_xcv, ver 1.0
May 13 00:35:42.733702 kernel: thunder_bgx, ver 1.0
May 13 00:35:42.733709 kernel: nicpf, ver 1.0
May 13 00:35:42.733717 kernel: nicvf, ver 1.0
May 13 00:35:42.733794 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:35:42.733853 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:35:42 UTC (1747096542)
May 13 00:35:42.733862 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:35:42.733868 kernel: NET: Registered PF_INET6 protocol family
May 13 00:35:42.733875 kernel: Segment Routing with IPv6
May 13 00:35:42.733882 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:35:42.733889 kernel: NET: Registered PF_PACKET protocol family
May 13 00:35:42.733897 kernel: Key type dns_resolver registered
May 13 00:35:42.733903 kernel: registered taskstats version 1
May 13 00:35:42.733910 kernel: Loading compiled-in X.509 certificates
May 13 00:35:42.733917 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: d291b704d59536a3c0ba96fd6f5a99459de8de99'
May 13 00:35:42.733923 kernel: Key type .fscrypt registered
May 13 00:35:42.733930 kernel: Key type fscrypt-provisioning registered
May 13 00:35:42.733936 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:35:42.733943 kernel: ima: Allocated hash algorithm: sha1
May 13 00:35:42.733950 kernel: ima: No architecture policies found
May 13 00:35:42.733958 kernel: clk: Disabling unused clocks
May 13 00:35:42.733964 kernel: Freeing unused kernel memory: 36480K
May 13 00:35:42.733971 kernel: Run /init as init process
May 13 00:35:42.733978 kernel: with arguments:
May 13 00:35:42.733984 kernel: /init
May 13 00:35:42.733990 kernel: with environment:
May 13 00:35:42.733997 kernel: HOME=/
May 13 00:35:42.734003 kernel: TERM=linux
May 13 00:35:42.734010 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:35:42.734020 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:35:42.734028 systemd[1]: Detected virtualization kvm.
May 13 00:35:42.734036 systemd[1]: Detected architecture arm64.
May 13 00:35:42.734043 systemd[1]: Running in initrd.
May 13 00:35:42.734050 systemd[1]: No hostname configured, using default hostname.
May 13 00:35:42.734056 systemd[1]: Hostname set to <localhost>.
May 13 00:35:42.734064 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:35:42.734072 systemd[1]: Queued start job for default target initrd.target.
May 13 00:35:42.734079 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:35:42.734086 systemd[1]: Reached target cryptsetup.target.
May 13 00:35:42.734094 systemd[1]: Reached target paths.target.
May 13 00:35:42.734101 systemd[1]: Reached target slices.target.
May 13 00:35:42.734108 systemd[1]: Reached target swap.target.
May 13 00:35:42.734115 systemd[1]: Reached target timers.target.
May 13 00:35:42.734122 systemd[1]: Listening on iscsid.socket.
May 13 00:35:42.734130 systemd[1]: Listening on iscsiuio.socket.
May 13 00:35:42.734137 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:35:42.734144 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:35:42.734151 systemd[1]: Listening on systemd-journald.socket.
May 13 00:35:42.734158 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:35:42.734166 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:35:42.734173 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:35:42.734180 systemd[1]: Reached target sockets.target.
May 13 00:35:42.734188 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:35:42.734195 systemd[1]: Finished network-cleanup.service.
May 13 00:35:42.734202 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:35:42.734210 systemd[1]: Starting systemd-journald.service...
May 13 00:35:42.734217 systemd[1]: Starting systemd-modules-load.service...
May 13 00:35:42.734224 systemd[1]: Starting systemd-resolved.service...
May 13 00:35:42.734231 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:35:42.734238 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:35:42.734245 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:35:42.734254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:35:42.734261 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:35:42.734268 kernel: audit: type=1130 audit(1747096542.731:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.734279 systemd-journald[289]: Journal started
May 13 00:35:42.734320 systemd-journald[289]: Runtime Journal (/run/log/journal/bbe816d977fd4eab953f5fbdecdae3e2) is 6.0M, max 48.7M, 42.6M free.
May 13 00:35:42.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.730349 systemd-modules-load[290]: Inserted module 'overlay'
May 13 00:35:42.736059 systemd[1]: Started systemd-journald.service.
May 13 00:35:42.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.739415 kernel: audit: type=1130 audit(1747096542.736:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.739709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:35:42.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.744424 kernel: audit: type=1130 audit(1747096542.740:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.744834 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:35:42.758421 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:35:42.759203 systemd-resolved[291]: Positive Trust Anchors:
May 13 00:35:42.759217 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:35:42.759245 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:35:42.764798 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 13 00:35:42.770850 kernel: Bridge firewalling registered
May 13 00:35:42.770869 kernel: audit: type=1130 audit(1747096542.767:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.765594 systemd[1]: Started systemd-resolved.service.
May 13 00:35:42.767783 systemd-modules-load[290]: Inserted module 'br_netfilter'
May 13 00:35:42.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.771015 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:35:42.775530 systemd[1]: Reached target nss-lookup.target.
May 13 00:35:42.777511 kernel: audit: type=1130 audit(1747096542.771:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.777729 systemd[1]: Starting dracut-cmdline.service...
May 13 00:35:42.781431 kernel: SCSI subsystem initialized
May 13 00:35:42.786891 dracut-cmdline[307]: dracut-dracut-053
May 13 00:35:42.789528 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:35:42.789546 kernel: device-mapper: uevent: version 1.0.3
May 13 00:35:42.789555 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:35:42.789564 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:35:42.798097 systemd-modules-load[290]: Inserted module 'dm_multipath'
May 13 00:35:42.798979 systemd[1]: Finished systemd-modules-load.service.
May 13 00:35:42.803833 kernel: audit: type=1130 audit(1747096542.799:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.801196 systemd[1]: Starting systemd-sysctl.service...
May 13 00:35:42.810255 systemd[1]: Finished systemd-sysctl.service.
May 13 00:35:42.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.814429 kernel: audit: type=1130 audit(1747096542.810:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.864433 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:35:42.876432 kernel: iscsi: registered transport (tcp)
May 13 00:35:42.893715 kernel: iscsi: registered transport (qla4xxx)
May 13 00:35:42.893738 kernel: QLogic iSCSI HBA Driver
May 13 00:35:42.930106 systemd[1]: Finished dracut-cmdline.service.
May 13 00:35:42.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.931831 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:35:42.935436 kernel: audit: type=1130 audit(1747096542.930:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:42.974429 kernel: raid6: neonx8 gen() 13744 MB/s
May 13 00:35:42.991420 kernel: raid6: neonx8 xor() 10768 MB/s
May 13 00:35:43.008417 kernel: raid6: neonx4 gen() 13523 MB/s
May 13 00:35:43.025414 kernel: raid6: neonx4 xor() 11079 MB/s
May 13 00:35:43.042412 kernel: raid6: neonx2 gen() 12953 MB/s
May 13 00:35:43.059411 kernel: raid6: neonx2 xor() 10347 MB/s
May 13 00:35:43.076423 kernel: raid6: neonx1 gen() 10570 MB/s
May 13 00:35:43.093423 kernel: raid6: neonx1 xor() 8771 MB/s
May 13 00:35:43.110448 kernel: raid6: int64x8 gen() 6262 MB/s
May 13 00:35:43.127426 kernel: raid6: int64x8 xor() 3520 MB/s
May 13 00:35:43.144427 kernel: raid6: int64x4 gen() 7199 MB/s
May 13 00:35:43.161426 kernel: raid6: int64x4 xor() 3851 MB/s
May 13 00:35:43.178425 kernel: raid6: int64x2 gen() 6118 MB/s
May 13 00:35:43.195423 kernel: raid6: int64x2 xor() 3315 MB/s
May 13 00:35:43.212425 kernel: raid6: int64x1 gen() 5033 MB/s
May 13 00:35:43.229605 kernel: raid6: int64x1 xor() 2643 MB/s
May 13 00:35:43.229617 kernel: raid6: using algorithm neonx8 gen() 13744 MB/s
May 13 00:35:43.229626 kernel: raid6: .... xor() 10768 MB/s, rmw enabled
May 13 00:35:43.230748 kernel: raid6: using neon recovery algorithm
May 13 00:35:43.242781 kernel: xor: measuring software checksum speed
May 13 00:35:43.242798 kernel: 8regs : 17206 MB/sec
May 13 00:35:43.243470 kernel: 32regs : 20691 MB/sec
May 13 00:35:43.244755 kernel: arm64_neon : 27570 MB/sec
May 13 00:35:43.244767 kernel: xor: using function: arm64_neon (27570 MB/sec)
May 13 00:35:43.306423 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 13 00:35:43.316232 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:35:43.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:43.319000 audit: BPF prog-id=7 op=LOAD
May 13 00:35:43.319000 audit: BPF prog-id=8 op=LOAD
May 13 00:35:43.320424 kernel: audit: type=1130 audit(1747096543.316:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:43.320506 systemd[1]: Starting systemd-udevd.service...
May 13 00:35:43.332135 systemd-udevd[490]: Using default interface naming scheme 'v252'.
May 13 00:35:43.335379 systemd[1]: Started systemd-udevd.service.
May 13 00:35:43.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:43.337235 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:35:43.348766 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
May 13 00:35:43.375252 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:35:43.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:43.376832 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:35:43.409313 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:35:43.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:43.438505 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:35:43.443379 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:35:43.443393 kernel: GPT:9289727 != 19775487
May 13 00:35:43.443423 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:35:43.443439 kernel: GPT:9289727 != 19775487
May 13 00:35:43.443447 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:35:43.443455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:35:43.462686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:35:43.465844 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (553)
May 13 00:35:43.467360 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:35:43.474046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:35:43.475074 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:35:43.479423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:35:43.481342 systemd[1]: Starting disk-uuid.service...
May 13 00:35:43.488255 disk-uuid[561]: Primary Header is updated.
May 13 00:35:43.488255 disk-uuid[561]: Secondary Entries is updated.
May 13 00:35:43.488255 disk-uuid[561]: Secondary Header is updated.
May 13 00:35:43.492426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:35:44.504984 disk-uuid[562]: The operation has completed successfully.
May 13 00:35:44.506259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:35:44.530463 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:35:44.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.530557 systemd[1]: Finished disk-uuid.service.
May 13 00:35:44.532289 systemd[1]: Starting verity-setup.service...
May 13 00:35:44.550422 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:35:44.571438 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:35:44.573691 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:35:44.575342 systemd[1]: Finished verity-setup.service.
May 13 00:35:44.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.621019 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:35:44.622288 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:35:44.621848 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:35:44.622567 systemd[1]: Starting ignition-setup.service...
May 13 00:35:44.624886 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:35:44.631512 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:35:44.631549 kernel: BTRFS info (device vda6): using free space tree
May 13 00:35:44.631565 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:35:44.642029 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:35:44.647838 systemd[1]: Finished ignition-setup.service.
May 13 00:35:44.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.649452 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:35:44.704675 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:35:44.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.706000 audit: BPF prog-id=9 op=LOAD
May 13 00:35:44.706964 systemd[1]: Starting systemd-networkd.service...
May 13 00:35:44.732693 systemd-networkd[739]: lo: Link UP
May 13 00:35:44.732703 systemd-networkd[739]: lo: Gained carrier
May 13 00:35:44.733085 systemd-networkd[739]: Enumeration completed
May 13 00:35:44.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.733250 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:35:44.733357 systemd[1]: Started systemd-networkd.service.
May 13 00:35:44.734665 systemd-networkd[739]: eth0: Link UP
May 13 00:35:44.734669 systemd-networkd[739]: eth0: Gained carrier
May 13 00:35:44.734764 systemd[1]: Reached target network.target.
May 13 00:35:44.741255 ignition[652]: Ignition 2.14.0
May 13 00:35:44.737437 systemd[1]: Starting iscsiuio.service...
May 13 00:35:44.741261 ignition[652]: Stage: fetch-offline
May 13 00:35:44.741306 ignition[652]: no configs at "/usr/lib/ignition/base.d"
May 13 00:35:44.741315 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:35:44.741502 ignition[652]: parsed url from cmdline: ""
May 13 00:35:44.741505 ignition[652]: no config URL provided
May 13 00:35:44.741510 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:35:44.741517 ignition[652]: no config at "/usr/lib/ignition/user.ign"
May 13 00:35:44.741536 ignition[652]: op(1): [started] loading QEMU firmware config module
May 13 00:35:44.741541 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:35:44.752463 ignition[652]: op(1): [finished] loading QEMU firmware config module
May 13 00:35:44.752490 ignition[652]: QEMU firmware config was not found. Ignoring...
May 13 00:35:44.755035 systemd[1]: Started iscsiuio.service.
May 13 00:35:44.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.756466 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:35:44.756683 systemd[1]: Starting iscsid.service...
May 13 00:35:44.760936 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:35:44.760936 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 13 00:35:44.760936 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:35:44.760936 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:35:44.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.772521 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:35:44.772521 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:35:44.763779 systemd[1]: Started iscsid.service.
May 13 00:35:44.770315 systemd[1]: Starting dracut-initqueue.service...
May 13 00:35:44.780207 systemd[1]: Finished dracut-initqueue.service.
May 13 00:35:44.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.781236 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:35:44.782740 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:35:44.784321 systemd[1]: Reached target remote-fs.target.
May 13 00:35:44.786738 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:35:44.795218 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:35:44.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.812700 ignition[652]: parsing config with SHA512: 201eeb7a0a83a1eb065663891b1c3aeb473cdbe98a179e54318aeb01b23e58d2d4963a7299b278eefb0c96de6595aa74591d8db66b98c018954096cd11945d89
May 13 00:35:44.819551 unknown[652]: fetched base config from "system"
May 13 00:35:44.819562 unknown[652]: fetched user config from "qemu"
May 13 00:35:44.820061 ignition[652]: fetch-offline: fetch-offline passed
May 13 00:35:44.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.821362 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:35:44.820115 ignition[652]: Ignition finished successfully
May 13 00:35:44.822958 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:35:44.823740 systemd[1]: Starting ignition-kargs.service...
May 13 00:35:44.832557 ignition[760]: Ignition 2.14.0
May 13 00:35:44.832567 ignition[760]: Stage: kargs
May 13 00:35:44.832658 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 13 00:35:44.832668 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:35:44.833608 ignition[760]: kargs: kargs passed
May 13 00:35:44.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.835877 systemd[1]: Finished ignition-kargs.service.
May 13 00:35:44.833652 ignition[760]: Ignition finished successfully
May 13 00:35:44.838290 systemd[1]: Starting ignition-disks.service...
May 13 00:35:44.844679 ignition[766]: Ignition 2.14.0
May 13 00:35:44.844688 ignition[766]: Stage: disks
May 13 00:35:44.844778 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 13 00:35:44.844787 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:35:44.848620 systemd[1]: Finished ignition-disks.service.
May 13 00:35:44.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.845954 ignition[766]: disks: disks passed
May 13 00:35:44.849670 systemd[1]: Reached target initrd-root-device.target.
May 13 00:35:44.845999 ignition[766]: Ignition finished successfully
May 13 00:35:44.850921 systemd[1]: Reached target local-fs-pre.target.
May 13 00:35:44.852149 systemd[1]: Reached target local-fs.target.
May 13 00:35:44.853565 systemd[1]: Reached target sysinit.target.
May 13 00:35:44.854816 systemd[1]: Reached target basic.target.
May 13 00:35:44.857005 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:35:44.868861 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 13 00:35:44.873364 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:35:44.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.878189 systemd[1]: Mounting sysroot.mount...
May 13 00:35:44.885431 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:35:44.885715 systemd[1]: Mounted sysroot.mount.
May 13 00:35:44.886437 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:35:44.888694 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:35:44.889555 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:35:44.889596 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:35:44.889620 systemd[1]: Reached target ignition-diskful.target.
May 13 00:35:44.891996 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:35:44.894021 systemd[1]: Starting initrd-setup-root.service...
May 13 00:35:44.898509 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:35:44.903634 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
May 13 00:35:44.907388 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:35:44.911565 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:35:44.945711 systemd[1]: Finished initrd-setup-root.service.
May 13 00:35:44.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.947439 systemd[1]: Starting ignition-mount.service...
May 13 00:35:44.948958 systemd[1]: Starting sysroot-boot.service...
May 13 00:35:44.954254 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:35:44.963088 ignition[826]: INFO : Ignition 2.14.0
May 13 00:35:44.963088 ignition[826]: INFO : Stage: mount
May 13 00:35:44.965444 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:35:44.965444 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:35:44.965444 ignition[826]: INFO : mount: mount passed
May 13 00:35:44.965444 ignition[826]: INFO : Ignition finished successfully
May 13 00:35:44.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:44.966672 systemd[1]: Finished ignition-mount.service.
May 13 00:35:44.978301 systemd[1]: Finished sysroot-boot.service.
May 13 00:35:44.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:45.582510 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:35:45.590232 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
May 13 00:35:45.590268 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:35:45.590278 kernel: BTRFS info (device vda6): using free space tree
May 13 00:35:45.590908 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:35:45.594505 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:35:45.596059 systemd[1]: Starting ignition-files.service...
May 13 00:35:45.612478 ignition[856]: INFO : Ignition 2.14.0 May 13 00:35:45.612478 ignition[856]: INFO : Stage: files May 13 00:35:45.612478 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:35:45.612478 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:45.612478 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 13 00:35:45.618377 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:35:45.618377 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:35:45.622740 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:35:45.624597 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:35:45.626014 unknown[856]: wrote ssh authorized keys file for user: core May 13 00:35:45.628479 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:35:45.628479 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 00:35:45.628479 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 00:35:45.800177 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:35:45.843525 systemd-networkd[739]: eth0: Gained IPv6LL May 13 00:35:45.918935 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 00:35:45.920994 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:35:45.922689 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 13 00:35:46.237068 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:35:46.319684 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:35:46.319684 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:35:46.323030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 13 00:35:46.334720 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:35:46.334720 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:35:46.334720 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:35:46.334720 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:35:46.334720 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:35:46.334720 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 13 00:35:46.591248 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:35:47.023337 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:35:47.023337 ignition[856]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:35:47.030241 ignition[856]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:35:47.091622 ignition[856]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:35:47.093328 ignition[856]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:35:47.093328 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:35:47.093328 
ignition[856]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:35:47.093328 ignition[856]: INFO : files: files passed May 13 00:35:47.093328 ignition[856]: INFO : Ignition finished successfully May 13 00:35:47.104695 kernel: kauditd_printk_skb: 23 callbacks suppressed May 13 00:35:47.104716 kernel: audit: type=1130 audit(1747096547.093:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.093192 systemd[1]: Finished ignition-files.service. May 13 00:35:47.110808 kernel: audit: type=1130 audit(1747096547.104:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.110829 kernel: audit: type=1131 audit(1747096547.104:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.095045 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:35:47.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.096338 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:35:47.117517 kernel: audit: type=1130 audit(1747096547.111:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.117540 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:35:47.096991 systemd[1]: Starting ignition-quench.service... May 13 00:35:47.120224 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:35:47.104335 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:35:47.104440 systemd[1]: Finished ignition-quench.service. May 13 00:35:47.107334 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:35:47.111760 systemd[1]: Reached target ignition-complete.target. May 13 00:35:47.116529 systemd[1]: Starting initrd-parse-etc.service... May 13 00:35:47.129530 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:35:47.129625 systemd[1]: Finished initrd-parse-etc.service. 
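
The files stage above is driven by an Ignition config. A rough sketch of its shape, reconstructed only from the GET and unit lines in the log (the spec version, the elided "..." unit contents, and the placeholder SSH key are assumptions, and several of the smaller files are omitted):

    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed; the log names only the binary, Ignition 2.14.0
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"],
        }]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw"}},
            ],
            "links": [{"path": "/etc/extensions/kubernetes.raw",
                       "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"}],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "..."},
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }
    print(json.dumps(config, indent=2))
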
May 13 00:35:47.137769 kernel: audit: type=1130 audit(1747096547.130:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.137791 kernel: audit: type=1131 audit(1747096547.130:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.131434 systemd[1]: Reached target initrd-fs.target. May 13 00:35:47.138619 systemd[1]: Reached target initrd.target. May 13 00:35:47.139937 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:35:47.140794 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:35:47.151477 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:35:47.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.155448 kernel: audit: type=1130 audit(1747096547.151:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.153088 systemd[1]: Starting initrd-cleanup.service... May 13 00:35:47.161260 systemd[1]: Stopped target nss-lookup.target. May 13 00:35:47.162155 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:35:47.163550 systemd[1]: Stopped target timers.target. May 13 00:35:47.164944 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:35:47.169419 kernel: audit: type=1131 audit(1747096547.165:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.165051 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:35:47.166518 systemd[1]: Stopped target initrd.target. May 13 00:35:47.170260 systemd[1]: Stopped target basic.target. May 13 00:35:47.171559 systemd[1]: Stopped target ignition-complete.target. May 13 00:35:47.172885 systemd[1]: Stopped target ignition-diskful.target. May 13 00:35:47.174211 systemd[1]: Stopped target initrd-root-device.target. May 13 00:35:47.175684 systemd[1]: Stopped target remote-fs.target. May 13 00:35:47.177028 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:35:47.178450 systemd[1]: Stopped target sysinit.target. May 13 00:35:47.179725 systemd[1]: Stopped target local-fs.target. May 13 00:35:47.181054 systemd[1]: Stopped target local-fs-pre.target. May 13 00:35:47.182509 systemd[1]: Stopped target swap.target. 
May 13 00:35:47.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.183755 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:35:47.189572 kernel: audit: type=1131 audit(1747096547.184:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.183865 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:35:47.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.185228 systemd[1]: Stopped target cryptsetup.target. May 13 00:35:47.194795 kernel: audit: type=1131 audit(1747096547.189:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.188820 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:35:47.188920 systemd[1]: Stopped dracut-initqueue.service. May 13 00:35:47.190361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:35:47.190487 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:35:47.194279 systemd[1]: Stopped target paths.target. May 13 00:35:47.195489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:35:47.197992 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:35:47.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.198999 systemd[1]: Stopped target slices.target. May 13 00:35:47.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.200326 systemd[1]: Stopped target sockets.target. May 13 00:35:47.201584 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:35:47.207954 iscsid[745]: iscsid shutting down. May 13 00:35:47.201688 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:35:47.203352 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:35:47.203472 systemd[1]: Stopped ignition-files.service. May 13 00:35:47.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:47.215822 ignition[896]: INFO : Ignition 2.14.0 May 13 00:35:47.215822 ignition[896]: INFO : Stage: umount May 13 00:35:47.215822 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:35:47.215822 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:47.215822 ignition[896]: INFO : umount: umount passed May 13 00:35:47.215822 ignition[896]: INFO : Ignition finished successfully May 13 00:35:47.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.205652 systemd[1]: Stopping ignition-mount.service... May 13 00:35:47.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.209151 systemd[1]: Stopping iscsid.service... May 13 00:35:47.211008 systemd[1]: Stopping sysroot-boot.service... May 13 00:35:47.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.212534 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:35:47.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.212676 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:35:47.213632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:35:47.213728 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:35:47.215994 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:35:47.216094 systemd[1]: Stopped iscsid.service. May 13 00:35:47.217750 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:35:47.217835 systemd[1]: Stopped ignition-mount.service. May 13 00:35:47.219167 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:35:47.219245 systemd[1]: Closed iscsid.socket. May 13 00:35:47.220516 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:35:47.220557 systemd[1]: Stopped ignition-disks.service. May 13 00:35:47.223082 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:35:47.223126 systemd[1]: Stopped ignition-kargs.service. 
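
The unit teardown here tracks Ignition's stage sequence, with the umount stage above running last inside the initramfs. As a reference sketch only (the order below is the commonly documented one, inferred rather than taken from this log; fetch-offline stands in for fetch when no remote config source is configured):

    # Ignition stages in run order; the log above shows their units
    # (ignition-fetch-offline, ignition-kargs, ignition-disks,
    #  ignition-mount, ignition-files) being stopped after the fact.
    IGNITION_STAGES = ("fetch", "kargs", "disks", "mount", "files", "umount")
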
May 13 00:35:47.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.224390 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:35:47.224798 systemd[1]: Stopped ignition-setup.service. May 13 00:35:47.226858 systemd[1]: Stopping iscsiuio.service... May 13 00:35:47.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.228783 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:35:47.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.258000 audit: BPF prog-id=6 op=UNLOAD May 13 00:35:47.229257 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:35:47.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.229367 systemd[1]: Finished initrd-cleanup.service. May 13 00:35:47.230354 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:35:47.230461 systemd[1]: Stopped iscsiuio.service. May 13 00:35:47.232161 systemd[1]: Stopped target network.target. May 13 00:35:47.232962 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:35:47.232996 systemd[1]: Closed iscsiuio.socket. May 13 00:35:47.235892 systemd[1]: Stopping systemd-networkd.service... May 13 00:35:47.237629 systemd[1]: Stopping systemd-resolved.service... May 13 00:35:47.244723 systemd-networkd[739]: eth0: DHCPv6 lease lost May 13 00:35:47.267000 audit: BPF prog-id=9 op=UNLOAD May 13 00:35:47.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.245923 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:35:47.246019 systemd[1]: Stopped systemd-networkd.service. May 13 00:35:47.249419 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:35:47.249502 systemd[1]: Stopped systemd-resolved.service. May 13 00:35:47.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.251599 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:35:47.251630 systemd[1]: Closed systemd-networkd.socket. May 13 00:35:47.253412 systemd[1]: Stopping network-cleanup.service... May 13 00:35:47.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.254272 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 13 00:35:47.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.254334 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:35:47.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.256689 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:35:47.256732 systemd[1]: Stopped systemd-sysctl.service. May 13 00:35:47.258997 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:35:47.259043 systemd[1]: Stopped systemd-modules-load.service. May 13 00:35:47.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.260771 systemd[1]: Stopping systemd-udevd.service... May 13 00:35:47.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.262869 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:35:47.267549 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:35:47.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.267676 systemd[1]: Stopped network-cleanup.service. May 13 00:35:47.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.271325 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:35:47.271469 systemd[1]: Stopped systemd-udevd.service. May 13 00:35:47.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.273560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:35:47.273601 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:35:47.275358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:35:47.275658 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:35:47.277177 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:35:47.277225 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:35:47.278504 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:35:47.278544 systemd[1]: Stopped dracut-cmdline.service. 
May 13 00:35:47.280124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:35:47.280163 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:35:47.282440 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:35:47.283884 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:35:47.283938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 13 00:35:47.286084 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:35:47.286124 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:35:47.286964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:35:47.287000 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:35:47.289042 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 00:35:47.289481 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:35:47.289574 systemd[1]: Stopped sysroot-boot.service. May 13 00:35:47.290911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:35:47.290990 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:35:47.292343 systemd[1]: Reached target initrd-switch-root.target. May 13 00:35:47.293513 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:35:47.293562 systemd[1]: Stopped initrd-setup-root.service. May 13 00:35:47.295636 systemd[1]: Starting initrd-switch-root.service... May 13 00:35:47.319602 systemd-journald[289]: Received SIGTERM from PID 1 (n/a). May 13 00:35:47.301845 systemd[1]: Switching root. May 13 00:35:47.320146 systemd-journald[289]: Journal stopped May 13 00:35:49.411191 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:35:49.411247 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:35:49.411258 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:35:49.411269 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:35:49.411278 kernel: SELinux: policy capability open_perms=1 May 13 00:35:49.411288 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:35:49.411304 kernel: SELinux: policy capability always_check_network=0 May 13 00:35:49.411314 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:35:49.411323 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:35:49.411333 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:35:49.411347 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:35:49.411359 systemd[1]: Successfully loaded SELinux policy in 34.632ms. May 13 00:35:49.411389 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.059ms. May 13 00:35:49.411421 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:35:49.411434 systemd[1]: Detected virtualization kvm. May 13 00:35:49.411447 systemd[1]: Detected architecture arm64. May 13 00:35:49.411457 systemd[1]: Detected first boot. May 13 00:35:49.411467 systemd[1]: Initializing machine ID from VM UUID. May 13 00:35:49.411477 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
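
The systemd 252 banner above encodes build options as a +/- feature string. A small helper to split such a string (parse_features is a made-up name for illustration, not a systemd API; the sample is an abridged copy of the logged string):

    def parse_features(line: str):
        enabled, disabled = set(), set()
        for tok in line.split():
            if tok[0] == "+":
                enabled.add(tok[1:])
            elif tok[0] == "-":
                disabled.add(tok[1:])
        return enabled, disabled

    sample = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SECCOMP -TPM2 +ZSTD"
    on, off = parse_features(sample)
    assert "SELINUX" in on and "APPARMOR" in off
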
May 13 00:35:49.411487 systemd[1]: Populated /etc with preset unit settings. May 13 00:35:49.411500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:35:49.411513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:35:49.411524 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:35:49.411535 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:35:49.411545 systemd[1]: Stopped initrd-switch-root.service. May 13 00:35:49.411556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:35:49.411567 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:35:49.411579 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:35:49.411589 systemd[1]: Created slice system-getty.slice. May 13 00:35:49.411600 systemd[1]: Created slice system-modprobe.slice. May 13 00:35:49.411610 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:35:49.411620 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:35:49.411631 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:35:49.411641 systemd[1]: Created slice user.slice. May 13 00:35:49.411651 systemd[1]: Started systemd-ask-password-console.path. May 13 00:35:49.411662 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:35:49.411673 systemd[1]: Set up automount boot.automount. May 13 00:35:49.411684 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:35:49.411694 systemd[1]: Stopped target initrd-switch-root.target. May 13 00:35:49.411704 systemd[1]: Stopped target initrd-fs.target. May 13 00:35:49.411714 systemd[1]: Stopped target initrd-root-fs.target. May 13 00:35:49.411726 systemd[1]: Reached target integritysetup.target. May 13 00:35:49.411736 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:35:49.411747 systemd[1]: Reached target remote-fs.target. May 13 00:35:49.411759 systemd[1]: Reached target slices.target. May 13 00:35:49.411769 systemd[1]: Reached target swap.target. May 13 00:35:49.411779 systemd[1]: Reached target torcx.target. May 13 00:35:49.411790 systemd[1]: Reached target veritysetup.target. May 13 00:35:49.411799 systemd[1]: Listening on systemd-coredump.socket. May 13 00:35:49.411810 systemd[1]: Listening on systemd-initctl.socket. May 13 00:35:49.411820 systemd[1]: Listening on systemd-networkd.socket. May 13 00:35:49.411834 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:35:49.411844 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:35:49.411855 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:35:49.411866 systemd[1]: Mounting dev-hugepages.mount... May 13 00:35:49.411876 systemd[1]: Mounting dev-mqueue.mount... May 13 00:35:49.411887 systemd[1]: Mounting media.mount... May 13 00:35:49.411897 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:35:49.411907 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:35:49.411917 systemd[1]: Mounting tmp.mount... May 13 00:35:49.411927 systemd[1]: Starting flatcar-tmpfiles.service... 
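
locksmithd.service still carries cgroup-v1 style CPUShares=/MemoryLimit= settings, which the warnings above ask to migrate to CPUWeight=/MemoryMax=. A conversion sketch, assuming the documented ranges (shares 2..262144 with default 1024, weight 1..10000 with default 100) and a linear mapping around the defaults:

    def shares_to_weight(shares: int) -> int:
        # 1024 shares corresponds to the default weight of 100.
        return max(1, min(10000, shares * 100 // 1024))

    assert shares_to_weight(1024) == 100
    assert shares_to_weight(2048) == 200
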
May 13 00:35:49.411938 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:49.411949 systemd[1]: Starting kmod-static-nodes.service... May 13 00:35:49.411961 systemd[1]: Starting modprobe@configfs.service... May 13 00:35:49.411975 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:49.411985 systemd[1]: Starting modprobe@drm.service... May 13 00:35:49.411995 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:49.412006 systemd[1]: Starting modprobe@fuse.service... May 13 00:35:49.412016 systemd[1]: Starting modprobe@loop.service... May 13 00:35:49.412027 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:35:49.412037 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:35:49.412047 systemd[1]: Stopped systemd-fsck-root.service. May 13 00:35:49.412059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:35:49.412069 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:35:49.412079 systemd[1]: Stopped systemd-journald.service. May 13 00:35:49.412090 systemd[1]: Starting systemd-journald.service... May 13 00:35:49.412100 kernel: loop: module loaded May 13 00:35:49.412110 kernel: fuse: init (API version 7.34) May 13 00:35:49.412120 systemd[1]: Starting systemd-modules-load.service... May 13 00:35:49.412130 systemd[1]: Starting systemd-network-generator.service... May 13 00:35:49.412140 systemd[1]: Starting systemd-remount-fs.service... May 13 00:35:49.412152 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:35:49.412163 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:35:49.412222 systemd[1]: Stopped verity-setup.service. May 13 00:35:49.412235 systemd[1]: Mounted dev-hugepages.mount. May 13 00:35:49.412245 systemd[1]: Mounted dev-mqueue.mount. May 13 00:35:49.412256 systemd[1]: Mounted media.mount. May 13 00:35:49.412266 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:35:49.412277 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:35:49.412287 systemd[1]: Mounted tmp.mount. May 13 00:35:49.412301 systemd[1]: Finished kmod-static-nodes.service. May 13 00:35:49.412311 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:35:49.412322 systemd[1]: Finished modprobe@configfs.service. May 13 00:35:49.412332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:49.412342 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:49.412353 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:35:49.412364 systemd[1]: Finished modprobe@drm.service. May 13 00:35:49.412383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:49.412420 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:49.412432 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:35:49.412442 systemd[1]: Finished modprobe@fuse.service. May 13 00:35:49.412453 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:49.412466 systemd-journald[993]: Journal started May 13 00:35:49.412515 systemd-journald[993]: Runtime Journal (/run/log/journal/bbe816d977fd4eab953f5fbdecdae3e2) is 6.0M, max 48.7M, 42.6M free. 
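
The journald size line above is self-consistent: the reported free figure is, up to the 0.1M rounding of the log line, just the max minus the current size.

    size_m, max_m, free_m = 6.0, 48.7, 42.6   # figures copied from the log line
    assert abs((size_m + free_m) - max_m) <= 0.2
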
May 13 00:35:47.396000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:35:47.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:35:47.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:35:47.492000 audit: BPF prog-id=10 op=LOAD May 13 00:35:47.492000 audit: BPF prog-id=10 op=UNLOAD May 13 00:35:47.492000 audit: BPF prog-id=11 op=LOAD May 13 00:35:47.492000 audit: BPF prog-id=11 op=UNLOAD May 13 00:35:47.541000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:35:47.541000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:47.541000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:35:47.542000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:35:47.542000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:47.542000 audit: CWD cwd="/" May 13 00:35:47.542000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:35:47.542000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:35:47.542000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:35:49.245000 audit: BPF prog-id=12 op=LOAD May 13 00:35:49.245000 audit: BPF prog-id=3 op=UNLOAD May 13 00:35:49.245000 audit: BPF prog-id=13 op=LOAD May 13 00:35:49.245000 audit: BPF prog-id=14 op=LOAD May 13 00:35:49.245000 audit: BPF prog-id=4 op=UNLOAD May 13 00:35:49.245000 audit: BPF prog-id=5 op=UNLOAD May 13 00:35:49.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.413432 systemd[1]: Finished modprobe@loop.service. May 13 00:35:49.262000 audit: BPF prog-id=12 op=UNLOAD May 13 00:35:49.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.353000 audit: BPF prog-id=15 op=LOAD May 13 00:35:49.358000 audit: BPF prog-id=16 op=LOAD May 13 00:35:49.359000 audit: BPF prog-id=17 op=LOAD May 13 00:35:49.359000 audit: BPF prog-id=13 op=UNLOAD May 13 00:35:49.359000 audit: BPF prog-id=14 op=UNLOAD May 13 00:35:49.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:49.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.409000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:35:49.409000 audit[993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffd619cd60 a2=4000 a3=1 items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:49.409000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:35:49.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:47.539728 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:35:49.243806 systemd[1]: Queued start job for default target multi-user.target. May 13 00:35:47.540003 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:35:49.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.243819 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:35:47.540023 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:35:49.246508 systemd[1]: systemd-journald.service: Deactivated successfully. 
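
The audit PROCTITLE records above carry the generator's command line hex-encoded with NUL separators (and truncated by the audit subsystem). Decoding the logged value recovers the argv:

    argv_hex = "00".join([
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72",
        "2F72756E2F73797374656D642F67656E657261746F72",
        "2F72756E2F73797374656D642F67656E657261746F722E6561726C79",
        "2F72756E2F73797374656D642F67656E657261746F722E6C61",  # truncated in the record
    ])
    print(bytes.fromhex(argv_hex).decode().split("\x00"))
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator', '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']
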
May 13 00:35:47.540053 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:35:47.540063 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:35:47.540090 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:35:47.540102 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:35:47.540288 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:35:47.540320 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:35:49.415025 systemd[1]: Started systemd-journald.service. May 13 00:35:47.540332 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:35:47.541209 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:35:47.541244 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 00:35:47.541307 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:35:47.541325 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:35:47.541344 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:35:47.541357 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:35:48.985510 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:49.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:48.985769 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:48.985856 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:48.986009 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:48.986056 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 00:35:48.986111 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:35:49.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.416722 systemd[1]: Finished systemd-modules-load.service. May 13 00:35:49.417885 systemd[1]: Finished systemd-network-generator.service. May 13 00:35:49.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.419089 systemd[1]: Finished systemd-remount-fs.service. May 13 00:35:49.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.420234 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:35:49.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.421548 systemd[1]: Reached target network-pre.target. May 13 00:35:49.423679 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:35:49.425541 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:35:49.426272 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:35:49.429622 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:35:49.431485 systemd[1]: Starting systemd-journal-flush.service... 
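
The torcx-generator debug lines spell out its store walk: each path in store_paths is tried in order, missing ones produce the "store skipped" entries, and the surviving archives (docker:20.10, docker:com.coreos.cl) are cached before the vendor profile is unpacked to /run/torcx/unpack. A minimal sketch of that walk, with paths copied from the log (scan_stores is a made-up helper, not torcx code):

    import os

    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.7",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.7",
        "/var/lib/torcx/store",
    ]

    def scan_stores(paths):
        archives = {}
        for path in paths:
            if not os.path.isdir(path):
                continue  # the "store skipped ... no such file or directory" case
            for name in sorted(os.listdir(path)):
                if name.endswith(".torcx.tgz"):
                    # "docker:20.10.torcx.tgz" -> image "docker", reference "20.10"
                    image, _, ref = name[:-len(".torcx.tgz")].partition(":")
                    archives.setdefault((image, ref), os.path.join(path, name))
        return archives
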
May 13 00:35:49.434072 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:35:49.435200 systemd[1]: Starting systemd-random-seed.service... May 13 00:35:49.436070 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:35:49.437089 systemd[1]: Starting systemd-sysctl.service... May 13 00:35:49.441757 systemd-journald[993]: Time spent on flushing to /var/log/journal/bbe816d977fd4eab953f5fbdecdae3e2 is 13.859ms for 996 entries. May 13 00:35:49.441757 systemd-journald[993]: System Journal (/var/log/journal/bbe816d977fd4eab953f5fbdecdae3e2) is 8.0M, max 195.6M, 187.6M free. May 13 00:35:49.468797 systemd-journald[993]: Received client request to flush runtime journal. May 13 00:35:49.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.440293 systemd[1]: Starting systemd-sysusers.service... May 13 00:35:49.443275 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:35:49.469334 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:35:49.444934 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:35:49.446244 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:35:49.447358 systemd[1]: Finished systemd-random-seed.service. May 13 00:35:49.448653 systemd[1]: Reached target first-boot-complete.target. May 13 00:35:49.450689 systemd[1]: Starting systemd-udev-settle.service... May 13 00:35:49.462798 systemd[1]: Finished systemd-sysctl.service. May 13 00:35:49.469823 systemd[1]: Finished systemd-journal-flush.service. May 13 00:35:49.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.473221 systemd[1]: Finished systemd-sysusers.service. May 13 00:35:49.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.475272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:35:49.494004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:35:49.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.856045 systemd[1]: Finished systemd-hwdb-update.service. 
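
The flush statistics above average out to about 14 microseconds per journal entry:

    flush_ms, entries = 13.859, 996
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # 13.9 us/entry
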
May 13 00:35:49.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.857000 audit: BPF prog-id=18 op=LOAD May 13 00:35:49.857000 audit: BPF prog-id=19 op=LOAD May 13 00:35:49.857000 audit: BPF prog-id=7 op=UNLOAD May 13 00:35:49.857000 audit: BPF prog-id=8 op=UNLOAD May 13 00:35:49.858828 systemd[1]: Starting systemd-udevd.service... May 13 00:35:49.880098 systemd-udevd[1037]: Using default interface naming scheme 'v252'. May 13 00:35:49.927765 systemd[1]: Started systemd-udevd.service. May 13 00:35:49.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:49.934000 audit: BPF prog-id=20 op=LOAD May 13 00:35:49.936084 systemd[1]: Starting systemd-networkd.service... May 13 00:35:49.944000 audit: BPF prog-id=21 op=LOAD May 13 00:35:49.944000 audit: BPF prog-id=22 op=LOAD May 13 00:35:49.944000 audit: BPF prog-id=23 op=LOAD May 13 00:35:49.945567 systemd[1]: Starting systemd-userdbd.service... May 13 00:35:49.952133 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 13 00:35:49.979684 systemd[1]: Started systemd-userdbd.service. May 13 00:35:49.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.009736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:35:50.040332 systemd-networkd[1057]: lo: Link UP May 13 00:35:50.040634 systemd-networkd[1057]: lo: Gained carrier May 13 00:35:50.041039 systemd-networkd[1057]: Enumeration completed May 13 00:35:50.041223 systemd-networkd[1057]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:35:50.041231 systemd[1]: Started systemd-networkd.service. May 13 00:35:50.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.047728 systemd-networkd[1057]: eth0: Link UP May 13 00:35:50.047826 systemd-networkd[1057]: eth0: Gained carrier May 13 00:35:50.049866 systemd[1]: Finished systemd-udev-settle.service. May 13 00:35:50.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.052059 systemd[1]: Starting lvm2-activation-early.service... May 13 00:35:50.070027 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:35:50.078551 systemd-networkd[1057]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:35:50.098315 systemd[1]: Finished lvm2-activation-early.service. May 13 00:35:50.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.099379 systemd[1]: Reached target cryptsetup.target. 
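
The DHCPv4 lease on eth0 (10.0.0.114/16, gateway 10.0.0.1) checks out with the standard ipaddress module; the gateway sits inside the on-link /16:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.114/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)             # 10.0.0.0/16
    print(gateway in iface.network)  # True
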
May 13 00:35:50.101296 systemd[1]: Starting lvm2-activation.service... May 13 00:35:50.104922 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:35:50.144413 systemd[1]: Finished lvm2-activation.service. May 13 00:35:50.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.145395 systemd[1]: Reached target local-fs-pre.target. May 13 00:35:50.146238 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:35:50.146271 systemd[1]: Reached target local-fs.target. May 13 00:35:50.147067 systemd[1]: Reached target machines.target. May 13 00:35:50.150604 systemd[1]: Starting ldconfig.service... May 13 00:35:50.151910 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:50.151970 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:50.153071 systemd[1]: Starting systemd-boot-update.service... May 13 00:35:50.159617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:35:50.161845 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:35:50.163898 systemd[1]: Starting systemd-sysext.service... May 13 00:35:50.169389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:35:50.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.175637 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) May 13 00:35:50.177155 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:35:50.190130 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:35:50.228198 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:35:50.228420 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:35:50.232879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:35:50.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.233693 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:35:50.244433 kernel: loop0: detected capacity change from 0 to 189592 May 13 00:35:50.254809 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) May 13 00:35:50.254809 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters May 13 00:35:50.256449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:35:50.261466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:35:50.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.265605 systemd[1]: Mounting boot.mount... 
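
The fsck.fat summary for /dev/vda1 (236 files, 117310/258078 clusters) puts the EFI filesystem at roughly 45% full:

    used, total = 117310, 258078
    print(f"{used / total:.1%}")  # 45.5%
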
May 13 00:35:50.273125 systemd[1]: Mounted boot.mount. May 13 00:35:50.273526 kernel: loop1: detected capacity change from 0 to 189592 May 13 00:35:50.278309 (sd-sysext)[1086]: Using extensions 'kubernetes'. May 13 00:35:50.279048 (sd-sysext)[1086]: Merged extensions into '/usr'. May 13 00:35:50.282548 systemd[1]: Finished systemd-boot-update.service. May 13 00:35:50.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.298591 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:50.300325 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:50.303667 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:50.306089 systemd[1]: Starting modprobe@loop.service... May 13 00:35:50.307265 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:50.307423 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:50.308251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:50.308419 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:50.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.310038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:50.310487 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:50.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.312740 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:50.312882 systemd[1]: Finished modprobe@loop.service. May 13 00:35:50.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.314613 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:35:50.314741 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 13 00:35:50.358102 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:35:50.365559 systemd[1]: Finished ldconfig.service. May 13 00:35:50.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.386945 systemd[1]: Mounting usr-share-oem.mount... May 13 00:35:50.391578 systemd[1]: Mounted usr-share-oem.mount. May 13 00:35:50.393310 systemd[1]: Finished systemd-sysext.service. May 13 00:35:50.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.395266 systemd[1]: Starting ensure-sysext.service... May 13 00:35:50.396913 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:35:50.400952 systemd[1]: Reloading. May 13 00:35:50.407186 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:35:50.410308 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:35:50.411648 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:35:50.439275 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-13T00:35:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:35:50.439307 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-13T00:35:50Z" level=info msg="torcx already run" May 13 00:35:50.496479 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:35:50.496501 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:35:50.512079 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 00:35:50.555000 audit: BPF prog-id=24 op=LOAD May 13 00:35:50.555000 audit: BPF prog-id=15 op=UNLOAD May 13 00:35:50.555000 audit: BPF prog-id=25 op=LOAD May 13 00:35:50.555000 audit: BPF prog-id=26 op=LOAD May 13 00:35:50.555000 audit: BPF prog-id=16 op=UNLOAD May 13 00:35:50.555000 audit: BPF prog-id=17 op=UNLOAD May 13 00:35:50.556000 audit: BPF prog-id=27 op=LOAD May 13 00:35:50.556000 audit: BPF prog-id=28 op=LOAD May 13 00:35:50.556000 audit: BPF prog-id=18 op=UNLOAD May 13 00:35:50.556000 audit: BPF prog-id=19 op=UNLOAD May 13 00:35:50.557000 audit: BPF prog-id=29 op=LOAD May 13 00:35:50.557000 audit: BPF prog-id=21 op=UNLOAD May 13 00:35:50.557000 audit: BPF prog-id=30 op=LOAD May 13 00:35:50.557000 audit: BPF prog-id=31 op=LOAD May 13 00:35:50.557000 audit: BPF prog-id=22 op=UNLOAD May 13 00:35:50.557000 audit: BPF prog-id=23 op=UNLOAD May 13 00:35:50.559000 audit: BPF prog-id=32 op=LOAD May 13 00:35:50.559000 audit: BPF prog-id=20 op=UNLOAD May 13 00:35:50.561924 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:35:50.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.566462 systemd[1]: Starting audit-rules.service... May 13 00:35:50.568424 systemd[1]: Starting clean-ca-certificates.service... May 13 00:35:50.570588 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:35:50.573000 audit: BPF prog-id=33 op=LOAD May 13 00:35:50.575388 systemd[1]: Starting systemd-resolved.service... May 13 00:35:50.577000 audit: BPF prog-id=34 op=LOAD May 13 00:35:50.578694 systemd[1]: Starting systemd-timesyncd.service... May 13 00:35:50.581648 systemd[1]: Starting systemd-update-utmp.service... May 13 00:35:50.588000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:35:50.589219 systemd[1]: Finished clean-ca-certificates.service. May 13 00:35:50.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.593190 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:50.594616 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:50.596524 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:50.598464 systemd[1]: Starting modprobe@loop.service... May 13 00:35:50.599242 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:50.599455 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:50.599609 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:35:50.600793 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 13 00:35:50.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.602285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:50.602427 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:50.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.603747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:50.603860 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:50.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.605271 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:50.605415 systemd[1]: Finished modprobe@loop.service. May 13 00:35:50.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.606938 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:35:50.607054 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:35:50.608464 systemd[1]: Starting systemd-update-done.service... May 13 00:35:50.610025 systemd[1]: Finished systemd-update-utmp.service. May 13 00:35:50.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.613196 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:50.614557 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:50.616553 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:50.618644 systemd[1]: Starting modprobe@loop.service... May 13 00:35:50.619511 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:50.619663 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 00:35:50.619796 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:35:50.620722 systemd[1]: Finished systemd-update-done.service. May 13 00:35:50.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.622066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:50.622184 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:50.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.623522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:50.623638 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:50.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.624939 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:50.625051 systemd[1]: Finished modprobe@loop.service. May 13 00:35:50.626194 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:35:50.626292 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:35:50.628788 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:50.630156 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:50.632086 systemd[1]: Starting modprobe@drm.service... May 13 00:35:50.633945 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:50.635957 systemd[1]: Starting modprobe@loop.service... May 13 00:35:50.636814 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:50.636960 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:50.637855 systemd-resolved[1155]: Positive Trust Anchors: May 13 00:35:50.637864 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:35:50.637890 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:35:50.638348 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:35:50.639483 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:35:50.640497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:50.640615 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:50.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.641889 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:35:50.642064 systemd[1]: Finished modprobe@drm.service. May 13 00:35:50.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.643327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:50.643474 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:50.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.644712 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:50.644819 systemd[1]: Finished modprobe@loop.service. May 13 00:35:50.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:50.646138 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:35:50.646233 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:35:50.647240 systemd[1]: Finished ensure-sysext.service. May 13 00:35:50.646000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:35:50.646000 audit[1183]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec016be0 a2=420 a3=0 items=0 ppid=1151 pid=1183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:50.646000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:35:50.647737 augenrules[1183]: No rules May 13 00:35:50.648449 systemd[1]: Finished audit-rules.service. May 13 00:35:50.649629 systemd[1]: Started systemd-timesyncd.service. May 13 00:35:50.650375 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:35:50.650445 systemd-timesyncd[1157]: Initial clock synchronization to Tue 2025-05-13 00:35:50.709459 UTC. May 13 00:35:50.650860 systemd[1]: Reached target time-set.target. May 13 00:35:50.657949 systemd-resolved[1155]: Defaulting to hostname 'linux'. May 13 00:35:50.659266 systemd[1]: Started systemd-resolved.service. May 13 00:35:50.660210 systemd[1]: Reached target network.target. May 13 00:35:50.661007 systemd[1]: Reached target nss-lookup.target. May 13 00:35:50.661818 systemd[1]: Reached target sysinit.target. May 13 00:35:50.662660 systemd[1]: Started motdgen.path. May 13 00:35:50.663372 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:35:50.664616 systemd[1]: Started logrotate.timer. May 13 00:35:50.665454 systemd[1]: Started mdadm.timer. May 13 00:35:50.666123 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:35:50.666983 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:35:50.667014 systemd[1]: Reached target paths.target. May 13 00:35:50.667812 systemd[1]: Reached target timers.target. May 13 00:35:50.668887 systemd[1]: Listening on dbus.socket. May 13 00:35:50.670720 systemd[1]: Starting docker.socket... May 13 00:35:50.674275 systemd[1]: Listening on sshd.socket. May 13 00:35:50.675148 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:50.675615 systemd[1]: Listening on docker.socket. May 13 00:35:50.676463 systemd[1]: Reached target sockets.target. May 13 00:35:50.677215 systemd[1]: Reached target basic.target. May 13 00:35:50.678029 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:35:50.678065 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:35:50.679026 systemd[1]: Starting containerd.service... May 13 00:35:50.680760 systemd[1]: Starting dbus.service... May 13 00:35:50.682731 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:35:50.684813 systemd[1]: Starting extend-filesystems.service... May 13 00:35:50.685766 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
May 13 00:35:50.687246 systemd[1]: Starting motdgen.service... May 13 00:35:50.689182 systemd[1]: Starting prepare-helm.service... May 13 00:35:50.690916 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:35:50.691847 jq[1193]: false May 13 00:35:50.692818 systemd[1]: Starting sshd-keygen.service... May 13 00:35:50.695952 systemd[1]: Starting systemd-logind.service... May 13 00:35:50.697061 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:50.697138 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:35:50.697585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:35:50.698465 systemd[1]: Starting update-engine.service... May 13 00:35:50.700468 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:35:50.703541 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:35:50.703725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:35:50.704201 jq[1209]: true May 13 00:35:50.704755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:35:50.704918 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:35:50.713430 extend-filesystems[1194]: Found loop1 May 13 00:35:50.713430 extend-filesystems[1194]: Found vda May 13 00:35:50.713430 extend-filesystems[1194]: Found vda1 May 13 00:35:50.713430 extend-filesystems[1194]: Found vda2 May 13 00:35:50.713430 extend-filesystems[1194]: Found vda3 May 13 00:35:50.713430 extend-filesystems[1194]: Found usr May 13 00:35:50.713430 extend-filesystems[1194]: Found vda4 May 13 00:35:50.713430 extend-filesystems[1194]: Found vda6 May 13 00:35:50.713430 extend-filesystems[1194]: Found vda7 May 13 00:35:50.734845 extend-filesystems[1194]: Found vda9 May 13 00:35:50.734845 extend-filesystems[1194]: Checking size of /dev/vda9 May 13 00:35:50.730319 dbus-daemon[1192]: [system] SELinux support is enabled May 13 00:35:50.741348 tar[1214]: linux-arm64/helm May 13 00:35:50.742676 jq[1215]: true May 13 00:35:50.715347 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:35:50.715556 systemd[1]: Finished motdgen.service. May 13 00:35:50.730542 systemd[1]: Started dbus.service. May 13 00:35:50.733127 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:35:50.733158 systemd[1]: Reached target system-config.target. May 13 00:35:50.734170 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:35:50.734188 systemd[1]: Reached target user-config.target. May 13 00:35:50.746450 extend-filesystems[1194]: Resized partition /dev/vda9 May 13 00:35:50.755247 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:35:50.768600 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:35:50.795310 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:35:50.795728 systemd-logind[1204]: New seat seat0. 
May 13 00:35:50.795975 update_engine[1207]: I0513 00:35:50.794688 1207 main.cc:92] Flatcar Update Engine starting May 13 00:35:50.799296 systemd[1]: Started systemd-logind.service. May 13 00:35:50.804421 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:35:50.805913 update_engine[1207]: I0513 00:35:50.805806 1207 update_check_scheduler.cc:74] Next update check in 10m37s May 13 00:35:50.806542 systemd[1]: Started update-engine.service. May 13 00:35:50.818049 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:35:50.818049 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:35:50.818049 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:35:50.810753 systemd[1]: Started locksmithd.service. May 13 00:35:50.823677 extend-filesystems[1194]: Resized filesystem in /dev/vda9 May 13 00:35:50.819706 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:35:50.827089 bash[1244]: Updated "/home/core/.ssh/authorized_keys" May 13 00:35:50.819876 systemd[1]: Finished extend-filesystems.service. May 13 00:35:50.825443 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:35:50.837555 env[1216]: time="2025-05-13T00:35:50.837510720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:35:50.858098 env[1216]: time="2025-05-13T00:35:50.858046160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:35:50.858498 env[1216]: time="2025-05-13T00:35:50.858473840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:35:50.859968 env[1216]: time="2025-05-13T00:35:50.859934960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:35:50.860055 env[1216]: time="2025-05-13T00:35:50.860039840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:35:50.860323 env[1216]: time="2025-05-13T00:35:50.860298160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:35:50.860442 env[1216]: time="2025-05-13T00:35:50.860424240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:35:50.860507 env[1216]: time="2025-05-13T00:35:50.860492480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:35:50.860558 env[1216]: time="2025-05-13T00:35:50.860545800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:35:50.860692 env[1216]: time="2025-05-13T00:35:50.860672600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:35:50.861099 env[1216]: time="2025-05-13T00:35:50.861076680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:35:50.861333 env[1216]: time="2025-05-13T00:35:50.861308440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:35:50.861442 env[1216]: time="2025-05-13T00:35:50.861425560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:35:50.861561 env[1216]: time="2025-05-13T00:35:50.861540960Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:35:50.861638 env[1216]: time="2025-05-13T00:35:50.861622400Z" level=info msg="metadata content store policy set" policy=shared May 13 00:35:50.865105 env[1216]: time="2025-05-13T00:35:50.865081680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:35:50.865200 env[1216]: time="2025-05-13T00:35:50.865185080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:35:50.865261 env[1216]: time="2025-05-13T00:35:50.865247600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:35:50.865340 env[1216]: time="2025-05-13T00:35:50.865326960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:35:50.865499 env[1216]: time="2025-05-13T00:35:50.865483200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:35:50.865567 env[1216]: time="2025-05-13T00:35:50.865553320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:35:50.865641 env[1216]: time="2025-05-13T00:35:50.865626640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:35:50.866024 env[1216]: time="2025-05-13T00:35:50.865999520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:35:50.866116 env[1216]: time="2025-05-13T00:35:50.866100720Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:35:50.866187 env[1216]: time="2025-05-13T00:35:50.866171680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:35:50.866246 env[1216]: time="2025-05-13T00:35:50.866232520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:35:50.866307 env[1216]: time="2025-05-13T00:35:50.866294040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:35:50.866487 env[1216]: time="2025-05-13T00:35:50.866466360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:35:50.866632 env[1216]: time="2025-05-13T00:35:50.866614920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:35:50.866956 env[1216]: time="2025-05-13T00:35:50.866937560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:35:50.867051 env[1216]: time="2025-05-13T00:35:50.867036440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867108 env[1216]: time="2025-05-13T00:35:50.867095000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:35:50.867258 env[1216]: time="2025-05-13T00:35:50.867242240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867338 env[1216]: time="2025-05-13T00:35:50.867322880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867436 env[1216]: time="2025-05-13T00:35:50.867421040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867496 env[1216]: time="2025-05-13T00:35:50.867483080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867550 env[1216]: time="2025-05-13T00:35:50.867537240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867605 env[1216]: time="2025-05-13T00:35:50.867592080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867663 env[1216]: time="2025-05-13T00:35:50.867649680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867728 env[1216]: time="2025-05-13T00:35:50.867714920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:35:50.867795 env[1216]: time="2025-05-13T00:35:50.867781040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:35:50.867961 env[1216]: time="2025-05-13T00:35:50.867943640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:35:50.868030 env[1216]: time="2025-05-13T00:35:50.868016040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:35:50.868096 env[1216]: time="2025-05-13T00:35:50.868082240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:35:50.868153 env[1216]: time="2025-05-13T00:35:50.868139840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:35:50.868216 env[1216]: time="2025-05-13T00:35:50.868199520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:35:50.868269 env[1216]: time="2025-05-13T00:35:50.868255680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:35:50.868329 env[1216]: time="2025-05-13T00:35:50.868315400Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:35:50.868454 env[1216]: time="2025-05-13T00:35:50.868436400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:35:50.868751 env[1216]: time="2025-05-13T00:35:50.868694280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:35:50.871486 env[1216]: time="2025-05-13T00:35:50.871304080Z" level=info msg="Connect containerd service" May 13 00:35:50.871546 env[1216]: time="2025-05-13T00:35:50.871509480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:35:50.872168 env[1216]: time="2025-05-13T00:35:50.872140800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:35:50.872416 env[1216]: time="2025-05-13T00:35:50.872359560Z" level=info msg="Start subscribing containerd event" May 13 00:35:50.872507 env[1216]: time="2025-05-13T00:35:50.872492960Z" level=info msg="Start recovering state" May 13 00:35:50.872658 env[1216]: time="2025-05-13T00:35:50.872643560Z" level=info msg="Start event monitor" May 13 00:35:50.872734 env[1216]: time="2025-05-13T00:35:50.872720720Z" level=info msg="Start snapshots syncer" May 13 00:35:50.872788 env[1216]: time="2025-05-13T00:35:50.872775800Z" level=info msg="Start cni network conf syncer for default" May 13 00:35:50.872842 env[1216]: time="2025-05-13T00:35:50.872831040Z" level=info msg="Start streaming server" May 13 00:35:50.873107 env[1216]: time="2025-05-13T00:35:50.873084400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:35:50.873647 env[1216]: time="2025-05-13T00:35:50.873623040Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:35:50.874717 env[1216]: time="2025-05-13T00:35:50.873758880Z" level=info msg="containerd successfully booted in 0.036868s" May 13 00:35:50.873767 systemd[1]: Started containerd.service. May 13 00:35:50.874117 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:35:51.099544 tar[1214]: linux-arm64/LICENSE May 13 00:35:51.099702 tar[1214]: linux-arm64/README.md May 13 00:35:51.104006 systemd[1]: Finished prepare-helm.service. May 13 00:35:52.051725 systemd-networkd[1057]: eth0: Gained IPv6LL May 13 00:35:52.054222 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:35:52.055524 systemd[1]: Reached target network-online.target. May 13 00:35:52.057883 systemd[1]: Starting kubelet.service... May 13 00:35:52.596733 systemd[1]: Started kubelet.service. May 13 00:35:52.870877 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:35:52.889341 systemd[1]: Finished sshd-keygen.service. May 13 00:35:52.891623 systemd[1]: Starting issuegen.service... May 13 00:35:52.896646 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:35:52.896794 systemd[1]: Finished issuegen.service. May 13 00:35:52.898916 systemd[1]: Starting systemd-user-sessions.service... May 13 00:35:52.905488 systemd[1]: Finished systemd-user-sessions.service. May 13 00:35:52.907584 systemd[1]: Started getty@tty1.service. May 13 00:35:52.909468 systemd[1]: Started serial-getty@ttyAMA0.service. May 13 00:35:52.910491 systemd[1]: Reached target getty.target. May 13 00:35:52.911272 systemd[1]: Reached target multi-user.target. May 13 00:35:52.913328 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:35:52.920875 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:35:52.921027 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:35:52.922130 systemd[1]: Startup finished in 598ms (kernel) + 4.780s (initrd) + 5.571s (userspace) = 10.949s. May 13 00:35:53.072114 kubelet[1260]: E0513 00:35:53.072062 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:35:53.073786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:35:53.073910 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:35:55.572183 systemd[1]: Created slice system-sshd.slice. May 13 00:35:55.573314 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:42862.service. May 13 00:35:55.620418 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 42862 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:35:55.623119 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:35:55.631792 systemd[1]: Created slice user-500.slice. May 13 00:35:55.632922 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:35:55.634705 systemd-logind[1204]: New session 1 of user core. May 13 00:35:55.641179 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:35:55.642592 systemd[1]: Starting user@500.service...
May 13 00:35:55.645777 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:35:55.706203 systemd[1285]: Queued start job for default target default.target. May 13 00:35:55.706772 systemd[1285]: Reached target paths.target. May 13 00:35:55.706807 systemd[1285]: Reached target sockets.target. May 13 00:35:55.706819 systemd[1285]: Reached target timers.target. May 13 00:35:55.706830 systemd[1285]: Reached target basic.target. May 13 00:35:55.706876 systemd[1285]: Reached target default.target. May 13 00:35:55.706901 systemd[1285]: Startup finished in 54ms. May 13 00:35:55.706979 systemd[1]: Started user@500.service. May 13 00:35:55.708003 systemd[1]: Started session-1.scope. May 13 00:35:55.760202 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:42878.service. May 13 00:35:55.799537 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 42878 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:35:55.800873 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:35:55.805093 systemd-logind[1204]: New session 2 of user core. May 13 00:35:55.806376 systemd[1]: Started session-2.scope. May 13 00:35:55.861570 sshd[1294]: pam_unix(sshd:session): session closed for user core May 13 00:35:55.865184 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:42890.service. May 13 00:35:55.865693 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:42878.service: Deactivated successfully. May 13 00:35:55.866314 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:35:55.866869 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit. May 13 00:35:55.867842 systemd-logind[1204]: Removed session 2. May 13 00:35:55.904636 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 42890 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:35:55.905906 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:35:55.909367 systemd-logind[1204]: New session 3 of user core. May 13 00:35:55.910142 systemd[1]: Started session-3.scope. May 13 00:35:55.960266 sshd[1299]: pam_unix(sshd:session): session closed for user core May 13 00:35:55.965079 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:42890.service: Deactivated successfully. May 13 00:35:55.965651 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:35:55.970558 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit. May 13 00:35:55.971686 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:42902.service. May 13 00:35:55.975740 systemd-logind[1204]: Removed session 3. May 13 00:35:56.010278 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:35:56.011958 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:35:56.016374 systemd[1]: Started session-4.scope. May 13 00:35:56.016609 systemd-logind[1204]: New session 4 of user core. May 13 00:35:56.070205 sshd[1306]: pam_unix(sshd:session): session closed for user core May 13 00:35:56.073232 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:42902.service: Deactivated successfully. May 13 00:35:56.073795 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:35:56.074898 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit. May 13 00:35:56.076479 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:42904.service. 
May 13 00:35:56.077110 systemd-logind[1204]: Removed session 4. May 13 00:35:56.113344 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 42904 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:35:56.115277 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:35:56.118969 systemd-logind[1204]: New session 5 of user core. May 13 00:35:56.120968 systemd[1]: Started session-5.scope. May 13 00:35:56.186629 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:35:56.186868 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:35:56.253219 systemd[1]: Starting docker.service... May 13 00:35:56.353583 env[1327]: time="2025-05-13T00:35:56.353523615Z" level=info msg="Starting up" May 13 00:35:56.355834 env[1327]: time="2025-05-13T00:35:56.355807044Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:35:56.355834 env[1327]: time="2025-05-13T00:35:56.355830854Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:35:56.355951 env[1327]: time="2025-05-13T00:35:56.355863056Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:35:56.355951 env[1327]: time="2025-05-13T00:35:56.355875062Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:35:56.358962 env[1327]: time="2025-05-13T00:35:56.358933620Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:35:56.359051 env[1327]: time="2025-05-13T00:35:56.359036368Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:35:56.359130 env[1327]: time="2025-05-13T00:35:56.359112496Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:35:56.359185 env[1327]: time="2025-05-13T00:35:56.359171760Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:35:56.473622 env[1327]: time="2025-05-13T00:35:56.473521694Z" level=info msg="Loading containers: start." May 13 00:35:56.612444 kernel: Initializing XFRM netlink socket May 13 00:35:56.642532 env[1327]: time="2025-05-13T00:35:56.642492805Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:35:56.705817 systemd-networkd[1057]: docker0: Link UP May 13 00:35:56.724995 env[1327]: time="2025-05-13T00:35:56.724888423Z" level=info msg="Loading containers: done." May 13 00:35:56.751777 env[1327]: time="2025-05-13T00:35:56.751727329Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:35:56.751956 env[1327]: time="2025-05-13T00:35:56.751931299Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:35:56.752053 env[1327]: time="2025-05-13T00:35:56.752033485Z" level=info msg="Daemon has completed initialization" May 13 00:35:56.768561 systemd[1]: Started docker.service. 
May 13 00:35:56.777804 env[1327]: time="2025-05-13T00:35:56.777742080Z" level=info msg="API listen on /run/docker.sock" May 13 00:35:57.396438 env[1216]: time="2025-05-13T00:35:57.396377613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 00:35:58.039732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385865582.mount: Deactivated successfully. May 13 00:35:59.315526 env[1216]: time="2025-05-13T00:35:59.315472998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:59.322299 env[1216]: time="2025-05-13T00:35:59.322239911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:59.325814 env[1216]: time="2025-05-13T00:35:59.325762280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:59.327955 env[1216]: time="2025-05-13T00:35:59.327876920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:59.328610 env[1216]: time="2025-05-13T00:35:59.328574849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 13 00:35:59.329535 env[1216]: time="2025-05-13T00:35:59.329503282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 00:36:00.732844 env[1216]: time="2025-05-13T00:36:00.732781189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:00.738930 env[1216]: time="2025-05-13T00:36:00.738875911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:00.745268 env[1216]: time="2025-05-13T00:36:00.745214896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:00.750937 env[1216]: time="2025-05-13T00:36:00.750891049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:00.752053 env[1216]: time="2025-05-13T00:36:00.752007687Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 13 00:36:00.752710 env[1216]: time="2025-05-13T00:36:00.752680860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 00:36:02.071482 env[1216]: time="2025-05-13T00:36:02.071420197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:02.073891 env[1216]: time="2025-05-13T00:36:02.073841512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:02.075568 env[1216]: time="2025-05-13T00:36:02.075536793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:02.078079 env[1216]: time="2025-05-13T00:36:02.078037644Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:02.078873 env[1216]: time="2025-05-13T00:36:02.078831393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 13 00:36:02.079582 env[1216]: time="2025-05-13T00:36:02.079552178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 00:36:03.303421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745854155.mount: Deactivated successfully. May 13 00:36:03.304344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:36:03.304487 systemd[1]: Stopped kubelet.service. May 13 00:36:03.305884 systemd[1]: Starting kubelet.service... May 13 00:36:03.413425 systemd[1]: Started kubelet.service. May 13 00:36:03.454463 kubelet[1461]: E0513 00:36:03.454380 1461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:36:03.457189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:36:03.457328 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:36:04.011150 env[1216]: time="2025-05-13T00:36:04.011103259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:04.012685 env[1216]: time="2025-05-13T00:36:04.012640420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:04.014116 env[1216]: time="2025-05-13T00:36:04.014086663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:04.015887 env[1216]: time="2025-05-13T00:36:04.015837542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:04.016504 env[1216]: time="2025-05-13T00:36:04.016475092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 13 00:36:04.017004 env[1216]: time="2025-05-13T00:36:04.016970577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:36:04.500924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215059610.mount: Deactivated successfully. May 13 00:36:05.266632 env[1216]: time="2025-05-13T00:36:05.266566014Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.268054 env[1216]: time="2025-05-13T00:36:05.268018069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.269947 env[1216]: time="2025-05-13T00:36:05.269914509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.272258 env[1216]: time="2025-05-13T00:36:05.272209203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.273137 env[1216]: time="2025-05-13T00:36:05.273098256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 00:36:05.274241 env[1216]: time="2025-05-13T00:36:05.274189459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:36:05.746496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529995888.mount: Deactivated successfully. 
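The \x2d sequences in the mount unit names above are systemd's unit-name escaping: "/" in a path becomes "-", and a literal "-" in a path component is escaped as \x2d. A sketch that reverses the encoding (the same thing `systemd-escape --unescape` does):

```python
import re

def mount_unit_to_path(unit: str) -> str:
    """Map a systemd .mount unit name back to its filesystem path."""
    body = unit.removesuffix(".mount")
    path = body.replace("-", "/")            # "-" separates path components
    path = re.sub(r"\\x([0-9a-fA-F]{2})",    # decode \xNN escapes afterwards
                  lambda m: chr(int(m.group(1), 16)), path)
    return "/" + path

# e.g. the tmpmount unit just deactivated above:
print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount529995888.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount529995888
```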
May 13 00:36:05.749836 env[1216]: time="2025-05-13T00:36:05.749791551Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.751177 env[1216]: time="2025-05-13T00:36:05.751135442Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.752654 env[1216]: time="2025-05-13T00:36:05.752616890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.754795 env[1216]: time="2025-05-13T00:36:05.754758089Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:05.755193 env[1216]: time="2025-05-13T00:36:05.755161108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 00:36:05.755859 env[1216]: time="2025-05-13T00:36:05.755823022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 00:36:06.236568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144672300.mount: Deactivated successfully. May 13 00:36:08.163048 env[1216]: time="2025-05-13T00:36:08.162963219Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:08.165354 env[1216]: time="2025-05-13T00:36:08.165315695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:08.167354 env[1216]: time="2025-05-13T00:36:08.167302251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:08.170496 env[1216]: time="2025-05-13T00:36:08.170445491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:08.170852 env[1216]: time="2025-05-13T00:36:08.170828464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 13 00:36:12.538906 systemd[1]: Stopped kubelet.service. May 13 00:36:12.541150 systemd[1]: Starting kubelet.service... May 13 00:36:12.563936 systemd[1]: Reloading. 
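Each pull sequence above records the same image under three spellings: the pulled tag (registry.k8s.io/etcd:3.5.15-0), the repo digest (registry.k8s.io/etcd@sha256:...), and the bare image ID (sha256:27e3...) that PullImage returns. A small sketch for telling the three apart:

```python
def split_image_ref(ref: str):
    """Split an OCI image reference into (name, tag, digest).

    Handles the three shapes seen in the log:
      registry.k8s.io/etcd:3.5.15-0        -> name + tag
      registry.k8s.io/etcd@sha256:...      -> name + digest
      sha256:...                           -> bare image ID
    """
    if ref.startswith("sha256:"):
        return None, None, ref
    name, _, digest = ref.partition("@")
    if ":" in name.rsplit("/", 1)[-1]:     # tag lives after the last component
        name, tag = name.rsplit(":", 1)
    else:
        tag = None
    return name, tag, digest or None

print(split_image_ref("registry.k8s.io/etcd:3.5.15-0"))
# -> ('registry.k8s.io/etcd', '3.5.15-0', None)
```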
May 13 00:36:12.609359 /usr/lib/systemd/system-generators/torcx-generator[1517]: time="2025-05-13T00:36:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:36:12.609392 /usr/lib/systemd/system-generators/torcx-generator[1517]: time="2025-05-13T00:36:12Z" level=info msg="torcx already run" May 13 00:36:12.705843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:36:12.705867 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:36:12.721391 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:36:12.791057 systemd[1]: Started kubelet.service. May 13 00:36:12.796801 systemd[1]: Stopping kubelet.service... May 13 00:36:12.797706 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:36:12.798023 systemd[1]: Stopped kubelet.service. May 13 00:36:12.800525 systemd[1]: Starting kubelet.service... May 13 00:36:12.888335 systemd[1]: Started kubelet.service. May 13 00:36:12.929476 kubelet[1565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:12.929476 kubelet[1565]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:36:12.929476 kubelet[1565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
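The three deprecation warnings above say the flags should move into the file passed via --config. A hedged sketch of the equivalents, assuming the KubeletConfiguration v1beta1 field names and a typical containerd socket path (the volume-plugin directory echoes the Flexvolume path logged further down):

```python
# Illustrative only: config-file counterparts of the deprecated kubelet flags.
KUBELET_CONFIG_YAML = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # --container-runtime-endpoint (assumed socket path)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # --volume-plugin-dir
# --pod-infra-container-image has no config-file field; per the warning,
# the sandbox image is taken from the CRI runtime (containerd) instead.
"""
```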
May 13 00:36:12.929831 kubelet[1565]: I0513 00:36:12.929742 1565 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:36:13.782062 kubelet[1565]: I0513 00:36:13.782004 1565 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:36:13.782062 kubelet[1565]: I0513 00:36:13.782044 1565 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:36:13.782324 kubelet[1565]: I0513 00:36:13.782294 1565 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:36:13.819716 kubelet[1565]: E0513 00:36:13.819669 1565 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:13.823206 kubelet[1565]: I0513 00:36:13.823168 1565 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:36:13.833530 kubelet[1565]: E0513 00:36:13.833471 1565 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:36:13.833530 kubelet[1565]: I0513 00:36:13.833522 1565 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:36:13.837238 kubelet[1565]: I0513 00:36:13.837216 1565 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:36:13.837700 kubelet[1565]: I0513 00:36:13.837679 1565 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:36:13.837848 kubelet[1565]: I0513 00:36:13.837813 1565 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:36:13.838024 kubelet[1565]: I0513 00:36:13.837844 1565 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:36:13.838099 kubelet[1565]: I0513 00:36:13.838030 1565 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:36:13.838099 kubelet[1565]: I0513 00:36:13.838041 1565 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:36:13.838172 kubelet[1565]: I0513 00:36:13.838159 1565 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:13.840792 kubelet[1565]: I0513 00:36:13.840764 1565 kubelet.go:408] "Attempting to sync node with API server" May 13 00:36:13.840792 kubelet[1565]: I0513 00:36:13.840801 1565 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:36:13.840929 kubelet[1565]: I0513 00:36:13.840896 1565 kubelet.go:314] "Adding apiserver pod source" May 13 00:36:13.840929 kubelet[1565]: I0513 00:36:13.840907 1565 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:36:13.851896 kubelet[1565]: I0513 00:36:13.851864 1565 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:36:13.860978 kubelet[1565]: I0513 00:36:13.860928 1565 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:36:13.861781 kubelet[1565]: W0513 00:36:13.861731 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.114:6443: connect: connection refused May 13 00:36:13.861893 kubelet[1565]: E0513 00:36:13.861825 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:13.862153 kubelet[1565]: W0513 00:36:13.862131 1565 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:36:13.863977 kubelet[1565]: W0513 00:36:13.863946 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused May 13 00:36:13.864071 kubelet[1565]: E0513 00:36:13.863986 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:13.870263 kubelet[1565]: I0513 00:36:13.870236 1565 server.go:1269] "Started kubelet" May 13 00:36:13.870644 kubelet[1565]: I0513 00:36:13.870450 1565 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:36:13.871662 kubelet[1565]: I0513 00:36:13.871632 1565 server.go:460] "Adding debug handlers to kubelet server" May 13 00:36:13.872597 kubelet[1565]: I0513 00:36:13.872541 1565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:36:13.872676 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
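The repeated "connection refused" against https://10.0.0.114:6443 is the expected bootstrap chicken-and-egg: this kubelet itself must start the static kube-apiserver pod before its own watches and lease updates can succeed, so every client call fails until that pod is up. Note the lease controller's retry interval doubles below (200ms, then 400ms, then 800ms). A sketch of the same probe-with-doubling-backoff pattern, assuming the address from the log:

```python
import socket
import time

def wait_for_apiserver(host: str = "10.0.0.114", port: int = 6443,
                       first: float = 0.2, cap: float = 10.0) -> None:
    """Probe the apiserver socket, doubling the delay after each refusal."""
    delay = first
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return                       # apiserver is accepting connections
        except OSError:                      # connection refused during bootstrap
            time.sleep(delay)
            delay = min(delay * 2, cap)      # 0.2s, 0.4s, 0.8s, ... capped
```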
May 13 00:36:13.872786 kubelet[1565]: I0513 00:36:13.872765 1565 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:36:13.872845 kubelet[1565]: I0513 00:36:13.872821 1565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:36:13.872893 kubelet[1565]: I0513 00:36:13.872878 1565 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:36:13.873895 kubelet[1565]: I0513 00:36:13.873696 1565 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:36:13.874215 kubelet[1565]: E0513 00:36:13.874191 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:13.876995 kubelet[1565]: I0513 00:36:13.874886 1565 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:36:13.876995 kubelet[1565]: I0513 00:36:13.874959 1565 reconciler.go:26] "Reconciler: start to sync state" May 13 00:36:13.877282 kubelet[1565]: E0513 00:36:13.877247 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" May 13 00:36:13.877498 kubelet[1565]: W0513 00:36:13.877448 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused May 13 00:36:13.877611 kubelet[1565]: E0513 00:36:13.877590 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:13.879088 kubelet[1565]: E0513 00:36:13.877781 1565 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef13d1d5dc7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:36:13.87020198 +0000 UTC m=+0.977664678,LastTimestamp:2025-05-13 00:36:13.87020198 +0000 UTC m=+0.977664678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:36:13.882523 kubelet[1565]: I0513 00:36:13.879914 1565 factory.go:221] Registration of the systemd container factory successfully May 13 00:36:13.882523 kubelet[1565]: I0513 00:36:13.880059 1565 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:36:13.886307 kubelet[1565]: E0513 00:36:13.886266 1565 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:36:13.887479 kubelet[1565]: I0513 00:36:13.887440 1565 factory.go:221] Registration of the containerd container factory successfully May 13 00:36:13.890526 kubelet[1565]: I0513 00:36:13.890246 1565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:36:13.891571 kubelet[1565]: I0513 00:36:13.891539 1565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:36:13.891571 kubelet[1565]: I0513 00:36:13.891567 1565 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:36:13.891676 kubelet[1565]: I0513 00:36:13.891589 1565 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:36:13.891676 kubelet[1565]: E0513 00:36:13.891637 1565 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:36:13.893897 kubelet[1565]: W0513 00:36:13.893854 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused May 13 00:36:13.894002 kubelet[1565]: E0513 00:36:13.893902 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:13.902130 kubelet[1565]: I0513 00:36:13.902101 1565 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:36:13.902130 kubelet[1565]: I0513 00:36:13.902124 1565 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:36:13.902278 kubelet[1565]: I0513 00:36:13.902146 1565 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:13.903963 kubelet[1565]: I0513 00:36:13.903927 1565 policy_none.go:49] "None policy: Start" May 13 00:36:13.904606 kubelet[1565]: I0513 00:36:13.904566 1565 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:36:13.904606 kubelet[1565]: I0513 00:36:13.904611 1565 state_mem.go:35] "Initializing new in-memory state store" May 13 00:36:13.910610 systemd[1]: Created slice kubepods.slice. May 13 00:36:13.915154 systemd[1]: Created slice kubepods-burstable.slice. May 13 00:36:13.918120 systemd[1]: Created slice kubepods-besteffort.slice. May 13 00:36:13.929712 kubelet[1565]: I0513 00:36:13.929681 1565 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:36:13.930242 kubelet[1565]: I0513 00:36:13.930223 1565 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:36:13.930364 kubelet[1565]: I0513 00:36:13.930326 1565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:36:13.931163 kubelet[1565]: I0513 00:36:13.931103 1565 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:36:13.931842 kubelet[1565]: E0513 00:36:13.931750 1565 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:36:13.999134 systemd[1]: Created slice kubepods-burstable-pod8f06b8e4d7c32187f395e1fa257636bb.slice. 
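The HardEvictionThresholds in the Container Manager nodeConfig dump above decode to five LessThan rules: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of how such a threshold is evaluated, distinguishing absolute quantities from fractions of capacity:

```python
# Thresholds transcribed from the nodeConfig dump above.
HARD_THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def signal_trips(signal: str, observed: int, capacity: int) -> bool:
    """True when the observed value falls below the hard-eviction threshold."""
    kind, value = HARD_THRESHOLDS[signal]
    limit = value if kind == "quantity" else capacity * value
    return observed < limit   # operator is LessThan for every signal above
```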
May 13 00:36:14.011533 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 00:36:14.027916 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 13 00:36:14.032641 kubelet[1565]: I0513 00:36:14.032527 1565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:36:14.033390 kubelet[1565]: E0513 00:36:14.033333 1565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" May 13 00:36:14.077886 kubelet[1565]: E0513 00:36:14.077838 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" May 13 00:36:14.176383 kubelet[1565]: I0513 00:36:14.176343 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:14.176383 kubelet[1565]: I0513 00:36:14.176393 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:14.176598 kubelet[1565]: I0513 00:36:14.176431 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:14.176598 kubelet[1565]: I0513 00:36:14.176450 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:36:14.176598 kubelet[1565]: I0513 00:36:14.176481 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f06b8e4d7c32187f395e1fa257636bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f06b8e4d7c32187f395e1fa257636bb\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:14.176598 kubelet[1565]: I0513 00:36:14.176497 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f06b8e4d7c32187f395e1fa257636bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f06b8e4d7c32187f395e1fa257636bb\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:14.176598 kubelet[1565]: I0513 00:36:14.176511 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:14.176716 kubelet[1565]: I0513 00:36:14.176525 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f06b8e4d7c32187f395e1fa257636bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f06b8e4d7c32187f395e1fa257636bb\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:14.176716 kubelet[1565]: I0513 00:36:14.176557 1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:14.235693 kubelet[1565]: I0513 00:36:14.235652 1565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:36:14.236045 kubelet[1565]: E0513 00:36:14.236022 1565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" May 13 00:36:14.310158 kubelet[1565]: E0513 00:36:14.310049 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:14.311338 env[1216]: time="2025-05-13T00:36:14.310883132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f06b8e4d7c32187f395e1fa257636bb,Namespace:kube-system,Attempt:0,}" May 13 00:36:14.327267 kubelet[1565]: E0513 00:36:14.327232 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:14.327984 env[1216]: time="2025-05-13T00:36:14.327912570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 00:36:14.330263 kubelet[1565]: E0513 00:36:14.330218 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:14.330738 env[1216]: time="2025-05-13T00:36:14.330692403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 00:36:14.478909 kubelet[1565]: E0513 00:36:14.478862 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" May 13 00:36:14.637622 kubelet[1565]: I0513 00:36:14.637333 1565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:36:14.638080 kubelet[1565]: E0513 00:36:14.638050 1565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" May 13 00:36:14.684065 kubelet[1565]: 
W0513 00:36:14.683998 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused May 13 00:36:14.684166 kubelet[1565]: E0513 00:36:14.684068 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:14.826446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790047452.mount: Deactivated successfully. May 13 00:36:14.830789 env[1216]: time="2025-05-13T00:36:14.830738331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.832585 env[1216]: time="2025-05-13T00:36:14.832537348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.833355 env[1216]: time="2025-05-13T00:36:14.833325698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.834312 env[1216]: time="2025-05-13T00:36:14.834283026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.835959 env[1216]: time="2025-05-13T00:36:14.835928270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.838475 env[1216]: time="2025-05-13T00:36:14.838446694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.841739 env[1216]: time="2025-05-13T00:36:14.841705491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.845054 env[1216]: time="2025-05-13T00:36:14.845020707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.845724 env[1216]: time="2025-05-13T00:36:14.845693498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.847603 env[1216]: time="2025-05-13T00:36:14.847577784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.848538 env[1216]: time="2025-05-13T00:36:14.848513464Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.849781 env[1216]: time="2025-05-13T00:36:14.849730722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:14.864050 kubelet[1565]: W0513 00:36:14.863920 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused May 13 00:36:14.864050 kubelet[1565]: E0513 00:36:14.863965 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:14.869448 env[1216]: time="2025-05-13T00:36:14.869351327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:14.869448 env[1216]: time="2025-05-13T00:36:14.869416830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:14.869448 env[1216]: time="2025-05-13T00:36:14.869435956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:14.869754 env[1216]: time="2025-05-13T00:36:14.869707570Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2a0997e406e8bde8d55f937aba9f0c9e34879cd416d0dafc4e38fc4a85abd1 pid=1617 runtime=io.containerd.runc.v2 May 13 00:36:14.870726 env[1216]: time="2025-05-13T00:36:14.870661176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:14.870726 env[1216]: time="2025-05-13T00:36:14.870697589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:14.870726 env[1216]: time="2025-05-13T00:36:14.870707872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:14.871034 env[1216]: time="2025-05-13T00:36:14.870988249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dad9dc64dc0d02a362b2ffa5af35b73d6d43e7bd0cb9a832a8fb8ac49a71da8f pid=1623 runtime=io.containerd.runc.v2 May 13 00:36:14.872265 env[1216]: time="2025-05-13T00:36:14.872199424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:14.872339 env[1216]: time="2025-05-13T00:36:14.872281172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:14.872339 env[1216]: time="2025-05-13T00:36:14.872296017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:14.872489 env[1216]: time="2025-05-13T00:36:14.872453991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0569f401842a184b8c7a26464a58cf5d4c991429b8d542a741c39e3715f4f714 pid=1624 runtime=io.containerd.runc.v2 May 13 00:36:14.884323 systemd[1]: Started cri-containerd-0569f401842a184b8c7a26464a58cf5d4c991429b8d542a741c39e3715f4f714.scope. May 13 00:36:14.892014 systemd[1]: Started cri-containerd-dad9dc64dc0d02a362b2ffa5af35b73d6d43e7bd0cb9a832a8fb8ac49a71da8f.scope. May 13 00:36:14.901075 systemd[1]: Started cri-containerd-1a2a0997e406e8bde8d55f937aba9f0c9e34879cd416d0dafc4e38fc4a85abd1.scope. May 13 00:36:14.945002 env[1216]: time="2025-05-13T00:36:14.944954363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f06b8e4d7c32187f395e1fa257636bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0569f401842a184b8c7a26464a58cf5d4c991429b8d542a741c39e3715f4f714\"" May 13 00:36:14.946610 kubelet[1565]: E0513 00:36:14.946168 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:14.952475 env[1216]: time="2025-05-13T00:36:14.952437128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dad9dc64dc0d02a362b2ffa5af35b73d6d43e7bd0cb9a832a8fb8ac49a71da8f\"" May 13 00:36:14.952557 env[1216]: time="2025-05-13T00:36:14.952525238Z" level=info msg="CreateContainer within sandbox \"0569f401842a184b8c7a26464a58cf5d4c991429b8d542a741c39e3715f4f714\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:36:14.953052 kubelet[1565]: E0513 00:36:14.953016 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:14.955262 env[1216]: time="2025-05-13T00:36:14.955226284Z" level=info msg="CreateContainer within sandbox \"dad9dc64dc0d02a362b2ffa5af35b73d6d43e7bd0cb9a832a8fb8ac49a71da8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:36:14.960012 env[1216]: time="2025-05-13T00:36:14.959986836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a2a0997e406e8bde8d55f937aba9f0c9e34879cd416d0dafc4e38fc4a85abd1\"" May 13 00:36:14.961542 kubelet[1565]: E0513 00:36:14.961520 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:14.963269 env[1216]: time="2025-05-13T00:36:14.963242272Z" level=info msg="CreateContainer within sandbox \"1a2a0997e406e8bde8d55f937aba9f0c9e34879cd416d0dafc4e38fc4a85abd1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:36:14.969282 env[1216]: time="2025-05-13T00:36:14.969250852Z" level=info msg="CreateContainer within sandbox \"0569f401842a184b8c7a26464a58cf5d4c991429b8d542a741c39e3715f4f714\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a288743cea9c7610574de4131fb1a3752bbd8c79cad3a3a7b5ba4eace3c78cc\"" May 13 00:36:14.970031 env[1216]: 
time="2025-05-13T00:36:14.970006871Z" level=info msg="StartContainer for \"9a288743cea9c7610574de4131fb1a3752bbd8c79cad3a3a7b5ba4eace3c78cc\"" May 13 00:36:14.973650 env[1216]: time="2025-05-13T00:36:14.973611266Z" level=info msg="CreateContainer within sandbox \"dad9dc64dc0d02a362b2ffa5af35b73d6d43e7bd0cb9a832a8fb8ac49a71da8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bcb519ca12e912012269f83dd21a7d08261d265ce566dbf6ec7ba6e4ab5a8cec\"" May 13 00:36:14.974056 env[1216]: time="2025-05-13T00:36:14.974029930Z" level=info msg="StartContainer for \"bcb519ca12e912012269f83dd21a7d08261d265ce566dbf6ec7ba6e4ab5a8cec\"" May 13 00:36:14.981638 env[1216]: time="2025-05-13T00:36:14.981598204Z" level=info msg="CreateContainer within sandbox \"1a2a0997e406e8bde8d55f937aba9f0c9e34879cd416d0dafc4e38fc4a85abd1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b073a790df6b1beb1c5f2a2cca755f2a3e5407e32017a22547a927ecc564b027\"" May 13 00:36:14.982147 env[1216]: time="2025-05-13T00:36:14.982125905Z" level=info msg="StartContainer for \"b073a790df6b1beb1c5f2a2cca755f2a3e5407e32017a22547a927ecc564b027\"" May 13 00:36:14.992498 systemd[1]: Started cri-containerd-bcb519ca12e912012269f83dd21a7d08261d265ce566dbf6ec7ba6e4ab5a8cec.scope. May 13 00:36:15.001776 systemd[1]: Started cri-containerd-9a288743cea9c7610574de4131fb1a3752bbd8c79cad3a3a7b5ba4eace3c78cc.scope. May 13 00:36:15.006570 systemd[1]: Started cri-containerd-b073a790df6b1beb1c5f2a2cca755f2a3e5407e32017a22547a927ecc564b027.scope. May 13 00:36:15.058337 kubelet[1565]: W0513 00:36:15.058258 1565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused May 13 00:36:15.058337 kubelet[1565]: E0513 00:36:15.058336 1565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" May 13 00:36:15.063982 env[1216]: time="2025-05-13T00:36:15.063928667Z" level=info msg="StartContainer for \"bcb519ca12e912012269f83dd21a7d08261d265ce566dbf6ec7ba6e4ab5a8cec\" returns successfully" May 13 00:36:15.071767 env[1216]: time="2025-05-13T00:36:15.071736850Z" level=info msg="StartContainer for \"9a288743cea9c7610574de4131fb1a3752bbd8c79cad3a3a7b5ba4eace3c78cc\" returns successfully" May 13 00:36:15.089174 env[1216]: time="2025-05-13T00:36:15.089127106Z" level=info msg="StartContainer for \"b073a790df6b1beb1c5f2a2cca755f2a3e5407e32017a22547a927ecc564b027\" returns successfully" May 13 00:36:15.440926 kubelet[1565]: I0513 00:36:15.440883 1565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:36:15.903544 kubelet[1565]: E0513 00:36:15.903513 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:15.905714 kubelet[1565]: E0513 00:36:15.905693 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:15.907681 kubelet[1565]: E0513 00:36:15.907664 1565 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:16.909129 kubelet[1565]: E0513 00:36:16.909104 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:17.172832 kubelet[1565]: E0513 00:36:17.172728 1565 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:36:17.327742 kubelet[1565]: I0513 00:36:17.327706 1565 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:36:17.327742 kubelet[1565]: E0513 00:36:17.327740 1565 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 00:36:17.337801 kubelet[1565]: E0513 00:36:17.337770 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:17.438811 kubelet[1565]: E0513 00:36:17.438709 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:17.539573 kubelet[1565]: E0513 00:36:17.539515 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:17.640067 kubelet[1565]: E0513 00:36:17.640029 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:17.740697 kubelet[1565]: E0513 00:36:17.740608 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:17.841371 kubelet[1565]: E0513 00:36:17.841330 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:17.941811 kubelet[1565]: E0513 00:36:17.941766 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:18.042474 kubelet[1565]: E0513 00:36:18.042385 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:18.142923 kubelet[1565]: E0513 00:36:18.142890 1565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:18.843034 kubelet[1565]: I0513 00:36:18.842979 1565 apiserver.go:52] "Watching apiserver" May 13 00:36:18.875228 kubelet[1565]: I0513 00:36:18.875187 1565 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:36:19.353249 systemd[1]: Reloading. May 13 00:36:19.402133 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-05-13T00:36:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:36:19.402505 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-05-13T00:36:19Z" level=info msg="torcx already run" May 13 00:36:19.466066 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 13 00:36:19.466084 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:36:19.482085 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:36:19.562090 systemd[1]: Stopping kubelet.service... May 13 00:36:19.580840 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:36:19.581026 systemd[1]: Stopped kubelet.service. May 13 00:36:19.581069 systemd[1]: kubelet.service: Consumed 1.366s CPU time. May 13 00:36:19.582620 systemd[1]: Starting kubelet.service... May 13 00:36:19.669704 systemd[1]: Started kubelet.service. May 13 00:36:19.704246 kubelet[1907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:19.704246 kubelet[1907]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:36:19.704246 kubelet[1907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:19.704623 kubelet[1907]: I0513 00:36:19.704296 1907 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:36:19.710513 kubelet[1907]: I0513 00:36:19.710480 1907 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:36:19.710649 kubelet[1907]: I0513 00:36:19.710636 1907 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:36:19.710953 kubelet[1907]: I0513 00:36:19.710933 1907 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:36:19.712355 kubelet[1907]: I0513 00:36:19.712331 1907 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:36:19.714458 kubelet[1907]: I0513 00:36:19.714426 1907 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:36:19.717394 kubelet[1907]: E0513 00:36:19.717361 1907 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:36:19.717485 kubelet[1907]: I0513 00:36:19.717408 1907 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:36:19.719623 kubelet[1907]: I0513 00:36:19.719603 1907 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:36:19.719817 kubelet[1907]: I0513 00:36:19.719803 1907 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:36:19.720019 kubelet[1907]: I0513 00:36:19.719992 1907 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:36:19.720354 kubelet[1907]: I0513 00:36:19.720179 1907 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:36:19.720545 kubelet[1907]: I0513 00:36:19.720530 1907 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:36:19.720612 kubelet[1907]: I0513 00:36:19.720602 1907 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:36:19.720692 kubelet[1907]: I0513 00:36:19.720682 1907 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:19.720848 kubelet[1907]: I0513 00:36:19.720834 1907 kubelet.go:408] "Attempting to sync node with API server" May 13 00:36:19.720938 kubelet[1907]: I0513 00:36:19.720924 1907 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:36:19.721012 kubelet[1907]: I0513 00:36:19.721001 1907 kubelet.go:314] "Adding apiserver pod source" May 13 00:36:19.721079 kubelet[1907]: I0513 00:36:19.721068 1907 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:36:19.721634 kubelet[1907]: I0513 00:36:19.721574 1907 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:36:19.722153 kubelet[1907]: I0513 00:36:19.722131 1907 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:36:19.722977 kubelet[1907]: I0513 00:36:19.722952 1907 server.go:1269] "Started kubelet" May 13 00:36:19.724782 kubelet[1907]: I0513 00:36:19.724764 1907 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:36:19.726694 kubelet[1907]: E0513 
00:36:19.726661 1907 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:36:19.731187 kubelet[1907]: I0513 00:36:19.728340 1907 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:36:19.731187 kubelet[1907]: I0513 00:36:19.729652 1907 server.go:460] "Adding debug handlers to kubelet server" May 13 00:36:19.731517 kubelet[1907]: I0513 00:36:19.731465 1907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:36:19.731866 kubelet[1907]: I0513 00:36:19.731833 1907 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:36:19.732218 kubelet[1907]: I0513 00:36:19.732192 1907 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:36:19.732279 kubelet[1907]: I0513 00:36:19.732248 1907 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:36:19.732479 kubelet[1907]: E0513 00:36:19.732445 1907 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:19.746229 kubelet[1907]: I0513 00:36:19.745138 1907 factory.go:221] Registration of the containerd container factory successfully May 13 00:36:19.746229 kubelet[1907]: I0513 00:36:19.745160 1907 factory.go:221] Registration of the systemd container factory successfully May 13 00:36:19.746229 kubelet[1907]: I0513 00:36:19.745982 1907 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:36:19.746904 kubelet[1907]: I0513 00:36:19.746638 1907 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:36:19.746986 kubelet[1907]: I0513 00:36:19.746971 1907 reconciler.go:26] "Reconciler: start to sync state" May 13 00:36:19.755260 kubelet[1907]: I0513 00:36:19.754880 1907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:36:19.757015 kubelet[1907]: I0513 00:36:19.755948 1907 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:36:19.757015 kubelet[1907]: I0513 00:36:19.755964 1907 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:36:19.757015 kubelet[1907]: I0513 00:36:19.755983 1907 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:36:19.757015 kubelet[1907]: E0513 00:36:19.756026 1907 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:36:19.786980 kubelet[1907]: I0513 00:36:19.786939 1907 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:36:19.786980 kubelet[1907]: I0513 00:36:19.786959 1907 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:36:19.786980 kubelet[1907]: I0513 00:36:19.786980 1907 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:19.787157 kubelet[1907]: I0513 00:36:19.787124 1907 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:36:19.787157 kubelet[1907]: I0513 00:36:19.787135 1907 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:36:19.787157 kubelet[1907]: I0513 00:36:19.787151 1907 policy_none.go:49] "None policy: Start" May 13 00:36:19.787732 kubelet[1907]: I0513 00:36:19.787701 1907 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:36:19.787777 kubelet[1907]: I0513 00:36:19.787738 1907 state_mem.go:35] "Initializing new in-memory state store" May 13 00:36:19.787907 kubelet[1907]: I0513 00:36:19.787894 1907 state_mem.go:75] "Updated machine memory state" May 13 00:36:19.791701 kubelet[1907]: I0513 00:36:19.791679 1907 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:36:19.792018 kubelet[1907]: I0513 00:36:19.792001 1907 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:36:19.792138 kubelet[1907]: I0513 00:36:19.792105 1907 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:36:19.792424 kubelet[1907]: I0513 00:36:19.792412 1907 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:36:19.895516 kubelet[1907]: I0513 00:36:19.895477 1907 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:36:19.901342 kubelet[1907]: I0513 00:36:19.901310 1907 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 00:36:19.901517 kubelet[1907]: I0513 00:36:19.901413 1907 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:36:19.947464 kubelet[1907]: I0513 00:36:19.947345 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:19.947464 kubelet[1907]: I0513 00:36:19.947382 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f06b8e4d7c32187f395e1fa257636bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f06b8e4d7c32187f395e1fa257636bb\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:19.947464 kubelet[1907]: I0513 00:36:19.947436 1907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:19.947464 kubelet[1907]: I0513 00:36:19.947455 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:19.947464 kubelet[1907]: I0513 00:36:19.947469 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:19.947727 kubelet[1907]: I0513 00:36:19.947484 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:36:19.947727 kubelet[1907]: I0513 00:36:19.947507 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f06b8e4d7c32187f395e1fa257636bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f06b8e4d7c32187f395e1fa257636bb\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:19.947727 kubelet[1907]: I0513 00:36:19.947521 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f06b8e4d7c32187f395e1fa257636bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f06b8e4d7c32187f395e1fa257636bb\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:19.947727 kubelet[1907]: I0513 00:36:19.947535 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:20.163957 kubelet[1907]: E0513 00:36:20.163920 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:20.164521 kubelet[1907]: E0513 00:36:20.164431 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:20.164598 kubelet[1907]: E0513 00:36:20.164518 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:20.349831 sudo[1942]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:36:20.350380 sudo[1942]: 
pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 00:36:20.721983 kubelet[1907]: I0513 00:36:20.721889 1907 apiserver.go:52] "Watching apiserver" May 13 00:36:20.747949 kubelet[1907]: I0513 00:36:20.747910 1907 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:36:20.770587 kubelet[1907]: E0513 00:36:20.770558 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:20.770758 kubelet[1907]: E0513 00:36:20.770728 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:20.780151 kubelet[1907]: E0513 00:36:20.780097 1907 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:36:20.780348 kubelet[1907]: E0513 00:36:20.780301 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:20.793619 kubelet[1907]: I0513 00:36:20.793531 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.793515615 podStartE2EDuration="1.793515615s" podCreationTimestamp="2025-05-13 00:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:20.793367592 +0000 UTC m=+1.116393930" watchObservedRunningTime="2025-05-13 00:36:20.793515615 +0000 UTC m=+1.116541953" May 13 00:36:20.801932 kubelet[1907]: I0513 00:36:20.801871 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8018542979999999 podStartE2EDuration="1.801854298s" podCreationTimestamp="2025-05-13 00:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:20.801764524 +0000 UTC m=+1.124790822" watchObservedRunningTime="2025-05-13 00:36:20.801854298 +0000 UTC m=+1.124880636" May 13 00:36:20.807360 sudo[1942]: pam_unix(sudo:session): session closed for user root May 13 00:36:20.823383 kubelet[1907]: I0513 00:36:20.823321 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8233069990000002 podStartE2EDuration="1.823306999s" podCreationTimestamp="2025-05-13 00:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:20.809539 +0000 UTC m=+1.132565338" watchObservedRunningTime="2025-05-13 00:36:20.823306999 +0000 UTC m=+1.146333297" May 13 00:36:21.772713 kubelet[1907]: E0513 00:36:21.772679 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.773122 kubelet[1907]: E0513 00:36:21.773097 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:22.825630 sudo[1315]: 
pam_unix(sudo:session): session closed for user root May 13 00:36:22.827172 sshd[1312]: pam_unix(sshd:session): session closed for user core May 13 00:36:22.829854 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:42904.service: Deactivated successfully. May 13 00:36:22.830753 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:36:22.830915 systemd[1]: session-5.scope: Consumed 6.793s CPU time. May 13 00:36:22.831378 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit. May 13 00:36:22.832080 systemd-logind[1204]: Removed session 5. May 13 00:36:24.908487 kubelet[1907]: I0513 00:36:24.908455 1907 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:36:24.909267 env[1216]: time="2025-05-13T00:36:24.909227671Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:36:24.910326 kubelet[1907]: I0513 00:36:24.910301 1907 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:36:25.914405 systemd[1]: Created slice kubepods-besteffort-podfc48d4b2_855d_4297_8c24_52d13f8dff87.slice. May 13 00:36:25.937182 systemd[1]: Created slice kubepods-burstable-podc54037b3_477b_4bd5_9ae0_e58cb5593a1b.slice. May 13 00:36:25.958582 systemd[1]: Created slice kubepods-besteffort-pod5ca47b04_cf1f_404d_89c7_16cbe6b82188.slice. May 13 00:36:26.091074 kubelet[1907]: I0513 00:36:26.091023 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-lib-modules\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091074 kubelet[1907]: I0513 00:36:26.091071 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlpm\" (UniqueName: \"kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-kube-api-access-9zlpm\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091471 kubelet[1907]: I0513 00:36:26.091093 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hostproc\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091471 kubelet[1907]: I0513 00:36:26.091109 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc48d4b2-855d-4297-8c24-52d13f8dff87-xtables-lock\") pod \"kube-proxy-vxr2p\" (UID: \"fc48d4b2-855d-4297-8c24-52d13f8dff87\") " pod="kube-system/kube-proxy-vxr2p" May 13 00:36:26.091471 kubelet[1907]: I0513 00:36:26.091125 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc48d4b2-855d-4297-8c24-52d13f8dff87-lib-modules\") pod \"kube-proxy-vxr2p\" (UID: \"fc48d4b2-855d-4297-8c24-52d13f8dff87\") " pod="kube-system/kube-proxy-vxr2p" May 13 00:36:26.091471 kubelet[1907]: I0513 00:36:26.091140 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-config-path\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091471 kubelet[1907]: I0513 00:36:26.091156 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-kernel\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091471 kubelet[1907]: I0513 00:36:26.091169 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hubble-tls\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091636 kubelet[1907]: I0513 00:36:26.091192 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-bpf-maps\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091636 kubelet[1907]: I0513 00:36:26.091208 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cni-path\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091636 kubelet[1907]: I0513 00:36:26.091223 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-cgroup\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091636 kubelet[1907]: I0513 00:36:26.091236 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-clustermesh-secrets\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091636 kubelet[1907]: I0513 00:36:26.091253 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca47b04-cf1f-404d-89c7-16cbe6b82188-cilium-config-path\") pod \"cilium-operator-5d85765b45-tq9gw\" (UID: \"5ca47b04-cf1f-404d-89c7-16cbe6b82188\") " pod="kube-system/cilium-operator-5d85765b45-tq9gw" May 13 00:36:26.091743 kubelet[1907]: I0513 00:36:26.091297 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfj7c\" (UniqueName: \"kubernetes.io/projected/fc48d4b2-855d-4297-8c24-52d13f8dff87-kube-api-access-hfj7c\") pod \"kube-proxy-vxr2p\" (UID: \"fc48d4b2-855d-4297-8c24-52d13f8dff87\") " pod="kube-system/kube-proxy-vxr2p" May 13 00:36:26.091743 kubelet[1907]: I0513 00:36:26.091349 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq62h\" (UniqueName: \"kubernetes.io/projected/5ca47b04-cf1f-404d-89c7-16cbe6b82188-kube-api-access-fq62h\") pod 
\"cilium-operator-5d85765b45-tq9gw\" (UID: \"5ca47b04-cf1f-404d-89c7-16cbe6b82188\") " pod="kube-system/cilium-operator-5d85765b45-tq9gw" May 13 00:36:26.091743 kubelet[1907]: I0513 00:36:26.091366 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-etc-cni-netd\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091743 kubelet[1907]: I0513 00:36:26.091382 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-xtables-lock\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091743 kubelet[1907]: I0513 00:36:26.091424 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-net\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.091850 kubelet[1907]: I0513 00:36:26.091460 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc48d4b2-855d-4297-8c24-52d13f8dff87-kube-proxy\") pod \"kube-proxy-vxr2p\" (UID: \"fc48d4b2-855d-4297-8c24-52d13f8dff87\") " pod="kube-system/kube-proxy-vxr2p" May 13 00:36:26.091850 kubelet[1907]: I0513 00:36:26.091515 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-run\") pod \"cilium-v5jdz\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " pod="kube-system/cilium-v5jdz" May 13 00:36:26.192329 kubelet[1907]: I0513 00:36:26.192228 1907 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 13 00:36:26.233203 kubelet[1907]: E0513 00:36:26.233150 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.233990 env[1216]: time="2025-05-13T00:36:26.233931504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxr2p,Uid:fc48d4b2-855d-4297-8c24-52d13f8dff87,Namespace:kube-system,Attempt:0,}" May 13 00:36:26.241369 kubelet[1907]: E0513 00:36:26.240394 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.242243 env[1216]: time="2025-05-13T00:36:26.241790156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5jdz,Uid:c54037b3-477b-4bd5-9ae0-e58cb5593a1b,Namespace:kube-system,Attempt:0,}" May 13 00:36:26.251806 env[1216]: time="2025-05-13T00:36:26.251664384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:26.251806 env[1216]: time="2025-05-13T00:36:26.251711147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:26.251806 env[1216]: time="2025-05-13T00:36:26.251721348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:26.252215 env[1216]: time="2025-05-13T00:36:26.252173618Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbddf709df664f02f0488e2e328231944d447f2a81bfd1b68ae9c9383c970284 pid=2006 runtime=io.containerd.runc.v2 May 13 00:36:26.259827 env[1216]: time="2025-05-13T00:36:26.259705248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:26.259827 env[1216]: time="2025-05-13T00:36:26.259751291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:26.259827 env[1216]: time="2025-05-13T00:36:26.259762612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:26.260434 env[1216]: time="2025-05-13T00:36:26.259949785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc pid=2029 runtime=io.containerd.runc.v2 May 13 00:36:26.261332 kubelet[1907]: E0513 00:36:26.261104 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.264950 env[1216]: time="2025-05-13T00:36:26.261846113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tq9gw,Uid:5ca47b04-cf1f-404d-89c7-16cbe6b82188,Namespace:kube-system,Attempt:0,}" May 13 00:36:26.264676 systemd[1]: Started cri-containerd-dbddf709df664f02f0488e2e328231944d447f2a81bfd1b68ae9c9383c970284.scope. May 13 00:36:26.277469 systemd[1]: Started cri-containerd-e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc.scope. May 13 00:36:26.296047 env[1216]: time="2025-05-13T00:36:26.295971422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:26.296160 env[1216]: time="2025-05-13T00:36:26.296053788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:26.296160 env[1216]: time="2025-05-13T00:36:26.296081990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:26.296306 env[1216]: time="2025-05-13T00:36:26.296246361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9 pid=2074 runtime=io.containerd.runc.v2 May 13 00:36:26.308840 systemd[1]: Started cri-containerd-4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9.scope. 
May 13 00:36:26.316724 env[1216]: time="2025-05-13T00:36:26.316688144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxr2p,Uid:fc48d4b2-855d-4297-8c24-52d13f8dff87,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbddf709df664f02f0488e2e328231944d447f2a81bfd1b68ae9c9383c970284\"" May 13 00:36:26.318149 kubelet[1907]: E0513 00:36:26.318122 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.321267 env[1216]: time="2025-05-13T00:36:26.321221011Z" level=info msg="CreateContainer within sandbox \"dbddf709df664f02f0488e2e328231944d447f2a81bfd1b68ae9c9383c970284\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:36:26.329455 env[1216]: time="2025-05-13T00:36:26.329420006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5jdz,Uid:c54037b3-477b-4bd5-9ae0-e58cb5593a1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\"" May 13 00:36:26.330170 kubelet[1907]: E0513 00:36:26.330148 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.332158 env[1216]: time="2025-05-13T00:36:26.332124149Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:36:26.342562 env[1216]: time="2025-05-13T00:36:26.342496011Z" level=info msg="CreateContainer within sandbox \"dbddf709df664f02f0488e2e328231944d447f2a81bfd1b68ae9c9383c970284\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7caa8fbb7078a2fcb630ee177d8f5a40fafaafdf85dc893b5bb94a1dc88ba98e\"" May 13 00:36:26.344728 env[1216]: time="2025-05-13T00:36:26.344688839Z" level=info msg="StartContainer for \"7caa8fbb7078a2fcb630ee177d8f5a40fafaafdf85dc893b5bb94a1dc88ba98e\"" May 13 00:36:26.364067 env[1216]: time="2025-05-13T00:36:26.364027828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tq9gw,Uid:5ca47b04-cf1f-404d-89c7-16cbe6b82188,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9\"" May 13 00:36:26.364988 kubelet[1907]: E0513 00:36:26.364965 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.375343 systemd[1]: Started cri-containerd-7caa8fbb7078a2fcb630ee177d8f5a40fafaafdf85dc893b5bb94a1dc88ba98e.scope. 
May 13 00:36:26.418513 env[1216]: time="2025-05-13T00:36:26.418468672Z" level=info msg="StartContainer for \"7caa8fbb7078a2fcb630ee177d8f5a40fafaafdf85dc893b5bb94a1dc88ba98e\" returns successfully" May 13 00:36:26.781950 kubelet[1907]: E0513 00:36:26.781901 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:26.790550 kubelet[1907]: I0513 00:36:26.790499 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vxr2p" podStartSLOduration=1.7904842479999998 podStartE2EDuration="1.790484248s" podCreationTimestamp="2025-05-13 00:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:26.79037432 +0000 UTC m=+7.113400658" watchObservedRunningTime="2025-05-13 00:36:26.790484248 +0000 UTC m=+7.113510586" May 13 00:36:28.870148 kubelet[1907]: E0513 00:36:28.870110 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:29.790429 kubelet[1907]: E0513 00:36:29.790377 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:29.873512 kubelet[1907]: E0513 00:36:29.873475 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:30.788100 kubelet[1907]: E0513 00:36:30.788028 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:30.795825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194681556.mount: Deactivated successfully. 
May 13 00:36:31.142921 kubelet[1907]: E0513 00:36:31.142888 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:33.143344 env[1216]: time="2025-05-13T00:36:33.143269616Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:33.146115 env[1216]: time="2025-05-13T00:36:33.146060066Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:33.149269 env[1216]: time="2025-05-13T00:36:33.149222852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:33.149859 env[1216]: time="2025-05-13T00:36:33.149820640Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 00:36:33.168489 env[1216]: time="2025-05-13T00:36:33.168391580Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:36:33.179446 env[1216]: time="2025-05-13T00:36:33.179378089Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:36:33.194346 env[1216]: time="2025-05-13T00:36:33.194280539Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\"" May 13 00:36:33.197941 env[1216]: time="2025-05-13T00:36:33.196576845Z" level=info msg="StartContainer for \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\"" May 13 00:36:33.224879 systemd[1]: run-containerd-runc-k8s.io-84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338-runc.QqpkEC.mount: Deactivated successfully. May 13 00:36:33.231107 systemd[1]: Started cri-containerd-84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338.scope. May 13 00:36:33.319363 systemd[1]: cri-containerd-84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338.scope: Deactivated successfully. 
May 13 00:36:33.322498 env[1216]: time="2025-05-13T00:36:33.322450835Z" level=info msg="StartContainer for \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\" returns successfully" May 13 00:36:33.377588 env[1216]: time="2025-05-13T00:36:33.377541307Z" level=info msg="shim disconnected" id=84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338 May 13 00:36:33.377588 env[1216]: time="2025-05-13T00:36:33.377583229Z" level=warning msg="cleaning up after shim disconnected" id=84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338 namespace=k8s.io May 13 00:36:33.377588 env[1216]: time="2025-05-13T00:36:33.377594589Z" level=info msg="cleaning up dead shim" May 13 00:36:33.385323 env[1216]: time="2025-05-13T00:36:33.385267545Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2340 runtime=io.containerd.runc.v2\n" May 13 00:36:33.802157 kubelet[1907]: E0513 00:36:33.802119 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:33.805685 env[1216]: time="2025-05-13T00:36:33.805643454Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:36:33.832876 env[1216]: time="2025-05-13T00:36:33.832823233Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\"" May 13 00:36:33.833551 env[1216]: time="2025-05-13T00:36:33.833517425Z" level=info msg="StartContainer for \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\"" May 13 00:36:33.848288 systemd[1]: Started cri-containerd-2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1.scope. May 13 00:36:33.895822 env[1216]: time="2025-05-13T00:36:33.895755548Z" level=info msg="StartContainer for \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\" returns successfully" May 13 00:36:33.914090 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:36:33.914288 systemd[1]: Stopped systemd-sysctl.service. May 13 00:36:33.914468 systemd[1]: Stopping systemd-sysctl.service... May 13 00:36:33.916082 systemd[1]: Starting systemd-sysctl.service... May 13 00:36:33.917202 systemd[1]: cri-containerd-2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1.scope: Deactivated successfully. May 13 00:36:33.928306 systemd[1]: Finished systemd-sysctl.service. 
May 13 00:36:33.942284 env[1216]: time="2025-05-13T00:36:33.942237821Z" level=info msg="shim disconnected" id=2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1 May 13 00:36:33.942564 env[1216]: time="2025-05-13T00:36:33.942543795Z" level=warning msg="cleaning up after shim disconnected" id=2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1 namespace=k8s.io May 13 00:36:33.942654 env[1216]: time="2025-05-13T00:36:33.942638039Z" level=info msg="cleaning up dead shim" May 13 00:36:33.951916 env[1216]: time="2025-05-13T00:36:33.951874547Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2404 runtime=io.containerd.runc.v2\n" May 13 00:36:34.189925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338-rootfs.mount: Deactivated successfully. May 13 00:36:34.803502 kubelet[1907]: E0513 00:36:34.803350 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:34.814423 env[1216]: time="2025-05-13T00:36:34.813953535Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:36:34.838981 env[1216]: time="2025-05-13T00:36:34.838928274Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\"" May 13 00:36:34.839830 env[1216]: time="2025-05-13T00:36:34.839435496Z" level=info msg="StartContainer for \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\"" May 13 00:36:34.861071 systemd[1]: Started cri-containerd-56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab.scope. May 13 00:36:34.953412 env[1216]: time="2025-05-13T00:36:34.952556591Z" level=info msg="StartContainer for \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\" returns successfully" May 13 00:36:34.968986 systemd[1]: cri-containerd-56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab.scope: Deactivated successfully. May 13 00:36:35.003176 env[1216]: time="2025-05-13T00:36:35.003128091Z" level=info msg="shim disconnected" id=56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab May 13 00:36:35.003484 env[1216]: time="2025-05-13T00:36:35.003462665Z" level=warning msg="cleaning up after shim disconnected" id=56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab namespace=k8s.io May 13 00:36:35.003559 env[1216]: time="2025-05-13T00:36:35.003543429Z" level=info msg="cleaning up dead shim" May 13 00:36:35.013129 env[1216]: time="2025-05-13T00:36:35.013087707Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2463 runtime=io.containerd.runc.v2\n" May 13 00:36:35.189584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab-rootfs.mount: Deactivated successfully. 
May 13 00:36:35.445597 env[1216]: time="2025-05-13T00:36:35.445464215Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:35.446895 env[1216]: time="2025-05-13T00:36:35.446865193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:35.448226 env[1216]: time="2025-05-13T00:36:35.448201889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:35.448628 env[1216]: time="2025-05-13T00:36:35.448599826Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 00:36:35.451434 env[1216]: time="2025-05-13T00:36:35.451000846Z" level=info msg="CreateContainer within sandbox \"4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:36:35.461883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816083734.mount: Deactivated successfully. May 13 00:36:35.463154 env[1216]: time="2025-05-13T00:36:35.463097752Z" level=info msg="CreateContainer within sandbox \"4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\"" May 13 00:36:35.463839 env[1216]: time="2025-05-13T00:36:35.463598933Z" level=info msg="StartContainer for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\"" May 13 00:36:35.480428 systemd[1]: Started cri-containerd-3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb.scope. 
May 13 00:36:35.522975 env[1216]: time="2025-05-13T00:36:35.522923012Z" level=info msg="StartContainer for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" returns successfully" May 13 00:36:35.807990 kubelet[1907]: E0513 00:36:35.807960 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:35.810913 env[1216]: time="2025-05-13T00:36:35.810865724Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:36:35.813232 kubelet[1907]: E0513 00:36:35.813194 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:35.868654 env[1216]: time="2025-05-13T00:36:35.868316604Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\"" May 13 00:36:35.871560 env[1216]: time="2025-05-13T00:36:35.871506298Z" level=info msg="StartContainer for \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\"" May 13 00:36:35.893947 systemd[1]: Started cri-containerd-803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10.scope. May 13 00:36:35.933628 systemd[1]: cri-containerd-803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10.scope: Deactivated successfully. May 13 00:36:35.943126 env[1216]: time="2025-05-13T00:36:35.943065848Z" level=info msg="StartContainer for \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\" returns successfully" May 13 00:36:35.973904 update_engine[1207]: I0513 00:36:35.973292 1207 update_attempter.cc:509] Updating boot flags... 
May 13 00:36:35.977716 env[1216]: time="2025-05-13T00:36:35.977540649Z" level=info msg="shim disconnected" id=803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10 May 13 00:36:35.977716 env[1216]: time="2025-05-13T00:36:35.977586570Z" level=warning msg="cleaning up after shim disconnected" id=803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10 namespace=k8s.io May 13 00:36:35.977716 env[1216]: time="2025-05-13T00:36:35.977595971Z" level=info msg="cleaning up dead shim" May 13 00:36:36.024548 env[1216]: time="2025-05-13T00:36:36.024504244Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2559 runtime=io.containerd.runc.v2\n" May 13 00:36:36.816212 kubelet[1907]: E0513 00:36:36.816176 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:36.816635 kubelet[1907]: E0513 00:36:36.816223 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:36.817983 env[1216]: time="2025-05-13T00:36:36.817935448Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:36:36.835645 env[1216]: time="2025-05-13T00:36:36.835573829Z" level=info msg="CreateContainer within sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\"" May 13 00:36:36.836122 env[1216]: time="2025-05-13T00:36:36.836080129Z" level=info msg="StartContainer for \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\"" May 13 00:36:36.836340 kubelet[1907]: I0513 00:36:36.836285 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-tq9gw" podStartSLOduration=2.7532953559999997 podStartE2EDuration="11.836267257s" podCreationTimestamp="2025-05-13 00:36:25 +0000 UTC" firstStartedPulling="2025-05-13 00:36:26.366731811 +0000 UTC m=+6.689758149" lastFinishedPulling="2025-05-13 00:36:35.449703752 +0000 UTC m=+15.772730050" observedRunningTime="2025-05-13 00:36:35.869532175 +0000 UTC m=+16.192558513" watchObservedRunningTime="2025-05-13 00:36:36.836267257 +0000 UTC m=+17.159293595" May 13 00:36:36.852003 systemd[1]: Started cri-containerd-bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab.scope. May 13 00:36:36.895949 env[1216]: time="2025-05-13T00:36:36.895875625Z" level=info msg="StartContainer for \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" returns successfully" May 13 00:36:37.000621 kubelet[1907]: I0513 00:36:37.000572 1907 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 00:36:37.040839 systemd[1]: Created slice kubepods-burstable-poda7b77f6a_d244_4de5_8963_b3d5ed87ea9a.slice. May 13 00:36:37.047808 systemd[1]: Created slice kubepods-burstable-podeb2aa602_3986_41b3_8122_c93ff35cdd82.slice. 
May 13 00:36:37.073652 kubelet[1907]: I0513 00:36:37.073532 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7b77f6a-d244-4de5-8963-b3d5ed87ea9a-config-volume\") pod \"coredns-6f6b679f8f-z5zqc\" (UID: \"a7b77f6a-d244-4de5-8963-b3d5ed87ea9a\") " pod="kube-system/coredns-6f6b679f8f-z5zqc" May 13 00:36:37.073838 kubelet[1907]: I0513 00:36:37.073817 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvvv\" (UniqueName: \"kubernetes.io/projected/eb2aa602-3986-41b3-8122-c93ff35cdd82-kube-api-access-nrvvv\") pod \"coredns-6f6b679f8f-xhhfs\" (UID: \"eb2aa602-3986-41b3-8122-c93ff35cdd82\") " pod="kube-system/coredns-6f6b679f8f-xhhfs" May 13 00:36:37.073948 kubelet[1907]: I0513 00:36:37.073931 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hpl\" (UniqueName: \"kubernetes.io/projected/a7b77f6a-d244-4de5-8963-b3d5ed87ea9a-kube-api-access-r9hpl\") pod \"coredns-6f6b679f8f-z5zqc\" (UID: \"a7b77f6a-d244-4de5-8963-b3d5ed87ea9a\") " pod="kube-system/coredns-6f6b679f8f-z5zqc" May 13 00:36:37.074066 kubelet[1907]: I0513 00:36:37.074049 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2aa602-3986-41b3-8122-c93ff35cdd82-config-volume\") pod \"coredns-6f6b679f8f-xhhfs\" (UID: \"eb2aa602-3986-41b3-8122-c93ff35cdd82\") " pod="kube-system/coredns-6f6b679f8f-xhhfs" May 13 00:36:37.161424 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 13 00:36:37.346405 kubelet[1907]: E0513 00:36:37.346296 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:37.347251 env[1216]: time="2025-05-13T00:36:37.347199491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z5zqc,Uid:a7b77f6a-d244-4de5-8963-b3d5ed87ea9a,Namespace:kube-system,Attempt:0,}" May 13 00:36:37.352085 kubelet[1907]: E0513 00:36:37.352060 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:37.352565 env[1216]: time="2025-05-13T00:36:37.352530412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xhhfs,Uid:eb2aa602-3986-41b3-8122-c93ff35cdd82,Namespace:kube-system,Attempt:0,}" May 13 00:36:37.411431 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 13 00:36:37.820610 kubelet[1907]: E0513 00:36:37.820571 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:38.821795 kubelet[1907]: E0513 00:36:38.821763 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:39.074732 systemd-networkd[1057]: cilium_host: Link UP May 13 00:36:39.075610 systemd-networkd[1057]: cilium_net: Link UP May 13 00:36:39.076808 systemd-networkd[1057]: cilium_net: Gained carrier May 13 00:36:39.077520 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:36:39.077585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:36:39.077681 systemd-networkd[1057]: cilium_host: Gained carrier May 13 00:36:39.156368 systemd-networkd[1057]: cilium_vxlan: Link UP May 13 00:36:39.156376 systemd-networkd[1057]: cilium_vxlan: Gained carrier May 13 00:36:39.307529 systemd-networkd[1057]: cilium_host: Gained IPv6LL May 13 00:36:39.453431 kernel: NET: Registered PF_ALG protocol family May 13 00:36:39.732987 systemd-networkd[1057]: cilium_net: Gained IPv6LL May 13 00:36:39.823250 kubelet[1907]: E0513 00:36:39.823215 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:40.086479 systemd-networkd[1057]: lxc_health: Link UP May 13 00:36:40.101507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:36:40.097017 systemd-networkd[1057]: lxc_health: Gained carrier May 13 00:36:40.259317 kubelet[1907]: I0513 00:36:40.259263 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v5jdz" podStartSLOduration=8.42360031 podStartE2EDuration="15.259246085s" podCreationTimestamp="2025-05-13 00:36:25 +0000 UTC" firstStartedPulling="2025-05-13 00:36:26.33155391 +0000 UTC m=+6.654580248" lastFinishedPulling="2025-05-13 00:36:33.167199685 +0000 UTC m=+13.490226023" observedRunningTime="2025-05-13 00:36:37.839131889 +0000 UTC m=+18.162158227" watchObservedRunningTime="2025-05-13 00:36:40.259246085 +0000 UTC m=+20.582272423" May 13 00:36:40.492479 systemd-networkd[1057]: lxc1ac1e3ba23e3: Link UP May 13 00:36:40.497480 systemd-networkd[1057]: lxc1835621ee250: Link UP May 13 00:36:40.499434 kernel: eth0: renamed from tmp94671 May 13 00:36:40.511501 kernel: eth0: renamed from tmp91e95 May 13 00:36:40.518633 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1835621ee250: link becomes ready May 13 00:36:40.518697 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1ac1e3ba23e3: link becomes ready May 13 00:36:40.518501 systemd-networkd[1057]: lxc1835621ee250: Gained carrier May 13 00:36:40.519335 systemd-networkd[1057]: lxc1ac1e3ba23e3: Gained carrier May 13 00:36:40.826152 kubelet[1907]: E0513 00:36:40.826120 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:41.075573 systemd-networkd[1057]: cilium_vxlan: Gained IPv6LL May 13 00:36:41.587560 systemd-networkd[1057]: lxc1835621ee250: Gained IPv6LL May 13 00:36:41.843613 systemd-networkd[1057]: lxc_health: Gained IPv6LL May 13 00:36:41.971591 systemd-networkd[1057]: lxc1ac1e3ba23e3: Gained IPv6LL May 13 
00:36:44.029580 env[1216]: time="2025-05-13T00:36:44.029502892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:44.029964 env[1216]: time="2025-05-13T00:36:44.029934663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:44.030069 env[1216]: time="2025-05-13T00:36:44.030046026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:44.030327 env[1216]: time="2025-05-13T00:36:44.030283913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91e95abee8f6bbbc2203e824a39b4b6e6934785e672ea41b2f4606cb91baf553 pid=3143 runtime=io.containerd.runc.v2 May 13 00:36:44.034425 env[1216]: time="2025-05-13T00:36:44.034006494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:44.034425 env[1216]: time="2025-05-13T00:36:44.034046976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:44.034425 env[1216]: time="2025-05-13T00:36:44.034057936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:44.034425 env[1216]: time="2025-05-13T00:36:44.034211140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/946714dd182338552a92d9e9591e6b3b739ed24a06e2e48434527d5be891e872 pid=3152 runtime=io.containerd.runc.v2 May 13 00:36:44.045052 systemd[1]: Started cri-containerd-91e95abee8f6bbbc2203e824a39b4b6e6934785e672ea41b2f4606cb91baf553.scope. May 13 00:36:44.059211 systemd[1]: Started cri-containerd-946714dd182338552a92d9e9591e6b3b739ed24a06e2e48434527d5be891e872.scope. 
May 13 00:36:44.093438 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:36:44.095551 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:36:44.113007 env[1216]: time="2025-05-13T00:36:44.112964490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xhhfs,Uid:eb2aa602-3986-41b3-8122-c93ff35cdd82,Namespace:kube-system,Attempt:0,} returns sandbox id \"91e95abee8f6bbbc2203e824a39b4b6e6934785e672ea41b2f4606cb91baf553\"" May 13 00:36:44.113626 kubelet[1907]: E0513 00:36:44.113606 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.115457 env[1216]: time="2025-05-13T00:36:44.115391157Z" level=info msg="CreateContainer within sandbox \"91e95abee8f6bbbc2203e824a39b4b6e6934785e672ea41b2f4606cb91baf553\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:36:44.115867 env[1216]: time="2025-05-13T00:36:44.115703845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z5zqc,Uid:a7b77f6a-d244-4de5-8963-b3d5ed87ea9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"946714dd182338552a92d9e9591e6b3b739ed24a06e2e48434527d5be891e872\"" May 13 00:36:44.117259 kubelet[1907]: E0513 00:36:44.117228 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.118950 env[1216]: time="2025-05-13T00:36:44.118895332Z" level=info msg="CreateContainer within sandbox \"946714dd182338552a92d9e9591e6b3b739ed24a06e2e48434527d5be891e872\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:36:44.144623 env[1216]: time="2025-05-13T00:36:44.144563753Z" level=info msg="CreateContainer within sandbox \"91e95abee8f6bbbc2203e824a39b4b6e6934785e672ea41b2f4606cb91baf553\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09a24aec0b2d1adc5fa66ad5f1dc4696a6d0e1ecd35d452ec7c9b55ad38c9503\"" May 13 00:36:44.145083 env[1216]: time="2025-05-13T00:36:44.145056687Z" level=info msg="StartContainer for \"09a24aec0b2d1adc5fa66ad5f1dc4696a6d0e1ecd35d452ec7c9b55ad38c9503\"" May 13 00:36:44.145249 env[1216]: time="2025-05-13T00:36:44.145223891Z" level=info msg="CreateContainer within sandbox \"946714dd182338552a92d9e9591e6b3b739ed24a06e2e48434527d5be891e872\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc9693f327d4160693390c03913281a1d8bb11a384d560daa5d830b35bad0b0a\"" May 13 00:36:44.146523 env[1216]: time="2025-05-13T00:36:44.146471885Z" level=info msg="StartContainer for \"cc9693f327d4160693390c03913281a1d8bb11a384d560daa5d830b35bad0b0a\"" May 13 00:36:44.160440 systemd[1]: Started cri-containerd-09a24aec0b2d1adc5fa66ad5f1dc4696a6d0e1ecd35d452ec7c9b55ad38c9503.scope. May 13 00:36:44.165328 systemd[1]: Started cri-containerd-cc9693f327d4160693390c03913281a1d8bb11a384d560daa5d830b35bad0b0a.scope. 
May 13 00:36:44.197744 env[1216]: time="2025-05-13T00:36:44.197671243Z" level=info msg="StartContainer for \"09a24aec0b2d1adc5fa66ad5f1dc4696a6d0e1ecd35d452ec7c9b55ad38c9503\" returns successfully" May 13 00:36:44.199388 kubelet[1907]: I0513 00:36:44.199103 1907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:36:44.199876 kubelet[1907]: E0513 00:36:44.199726 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.204965 env[1216]: time="2025-05-13T00:36:44.204919481Z" level=info msg="StartContainer for \"cc9693f327d4160693390c03913281a1d8bb11a384d560daa5d830b35bad0b0a\" returns successfully" May 13 00:36:44.836132 kubelet[1907]: E0513 00:36:44.835828 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.837625 kubelet[1907]: E0513 00:36:44.837597 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.837852 kubelet[1907]: E0513 00:36:44.837833 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.847875 kubelet[1907]: I0513 00:36:44.847828 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z5zqc" podStartSLOduration=19.847815315 podStartE2EDuration="19.847815315s" podCreationTimestamp="2025-05-13 00:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:44.847531747 +0000 UTC m=+25.170558085" watchObservedRunningTime="2025-05-13 00:36:44.847815315 +0000 UTC m=+25.170841653" May 13 00:36:44.867989 kubelet[1907]: I0513 00:36:44.867766 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xhhfs" podStartSLOduration=19.867748299 podStartE2EDuration="19.867748299s" podCreationTimestamp="2025-05-13 00:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:44.867618496 +0000 UTC m=+25.190644794" watchObservedRunningTime="2025-05-13 00:36:44.867748299 +0000 UTC m=+25.190774637" May 13 00:36:45.034072 systemd[1]: run-containerd-runc-k8s.io-946714dd182338552a92d9e9591e6b3b739ed24a06e2e48434527d5be891e872-runc.h5bvxZ.mount: Deactivated successfully. 
May 13 00:36:45.839193 kubelet[1907]: E0513 00:36:45.839163 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:45.839636 kubelet[1907]: E0513 00:36:45.839240 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:46.841116 kubelet[1907]: E0513 00:36:46.841075 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:46.841116 kubelet[1907]: E0513 00:36:46.841126 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:47.204644 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:51726.service. May 13 00:36:47.251271 sshd[3300]: Accepted publickey for core from 10.0.0.1 port 51726 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:47.252863 sshd[3300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:47.257745 systemd-logind[1204]: New session 6 of user core. May 13 00:36:47.258606 systemd[1]: Started session-6.scope. May 13 00:36:47.410237 sshd[3300]: pam_unix(sshd:session): session closed for user core May 13 00:36:47.413963 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:36:47.414573 systemd-logind[1204]: Session 6 logged out. Waiting for processes to exit. May 13 00:36:47.414704 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:51726.service: Deactivated successfully. May 13 00:36:47.416554 systemd-logind[1204]: Removed session 6. May 13 00:36:52.417036 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:51732.service. May 13 00:36:52.464858 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 51732 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:52.466617 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:52.471329 systemd-logind[1204]: New session 7 of user core. May 13 00:36:52.474716 systemd[1]: Started session-7.scope. May 13 00:36:52.607314 sshd[3316]: pam_unix(sshd:session): session closed for user core May 13 00:36:52.610165 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:51732.service: Deactivated successfully. May 13 00:36:52.611002 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:36:52.612808 systemd-logind[1204]: Session 7 logged out. Waiting for processes to exit. May 13 00:36:52.613953 systemd-logind[1204]: Removed session 7. May 13 00:36:57.611658 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:33280.service. May 13 00:36:57.663376 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 33280 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:57.664988 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:57.668959 systemd-logind[1204]: New session 8 of user core. May 13 00:36:57.669531 systemd[1]: Started session-8.scope. May 13 00:36:57.812929 sshd[3333]: pam_unix(sshd:session): session closed for user core May 13 00:36:57.816071 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:33280.service: Deactivated successfully. May 13 00:36:57.816877 systemd[1]: session-8.scope: Deactivated successfully. 
May 13 00:36:57.817425 systemd-logind[1204]: Session 8 logged out. Waiting for processes to exit. May 13 00:36:57.818202 systemd-logind[1204]: Removed session 8. May 13 00:37:02.819774 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:42952.service. May 13 00:37:02.857316 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 42952 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:02.858785 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:02.865487 systemd-logind[1204]: New session 9 of user core. May 13 00:37:02.866436 systemd[1]: Started session-9.scope. May 13 00:37:02.993708 sshd[3348]: pam_unix(sshd:session): session closed for user core May 13 00:37:02.996914 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:42968.service. May 13 00:37:02.998743 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:42952.service: Deactivated successfully. May 13 00:37:02.999376 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:37:02.999926 systemd-logind[1204]: Session 9 logged out. Waiting for processes to exit. May 13 00:37:03.000882 systemd-logind[1204]: Removed session 9. May 13 00:37:03.041562 sshd[3361]: Accepted publickey for core from 10.0.0.1 port 42968 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:03.042979 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:03.046595 systemd-logind[1204]: New session 10 of user core. May 13 00:37:03.047454 systemd[1]: Started session-10.scope. May 13 00:37:03.193796 sshd[3361]: pam_unix(sshd:session): session closed for user core May 13 00:37:03.197169 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:42972.service. May 13 00:37:03.203380 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:42968.service: Deactivated successfully. May 13 00:37:03.204219 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:37:03.209149 systemd-logind[1204]: Session 10 logged out. Waiting for processes to exit. May 13 00:37:03.211813 systemd-logind[1204]: Removed session 10. May 13 00:37:03.244618 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 42972 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:03.246204 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:03.249884 systemd-logind[1204]: New session 11 of user core. May 13 00:37:03.250745 systemd[1]: Started session-11.scope. May 13 00:37:03.360855 sshd[3375]: pam_unix(sshd:session): session closed for user core May 13 00:37:03.363569 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:42972.service: Deactivated successfully. May 13 00:37:03.364297 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:37:03.365031 systemd-logind[1204]: Session 11 logged out. Waiting for processes to exit. May 13 00:37:03.365782 systemd-logind[1204]: Removed session 11. May 13 00:37:08.365642 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:42974.service. May 13 00:37:08.409186 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 42974 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:08.410726 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:08.415133 systemd-logind[1204]: New session 12 of user core. May 13 00:37:08.415555 systemd[1]: Started session-12.scope. 
May 13 00:37:08.535889 sshd[3390]: pam_unix(sshd:session): session closed for user core May 13 00:37:08.539290 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:42974.service: Deactivated successfully. May 13 00:37:08.540000 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:37:08.541143 systemd-logind[1204]: Session 12 logged out. Waiting for processes to exit. May 13 00:37:08.542258 systemd-logind[1204]: Removed session 12. May 13 00:37:13.539391 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:56506.service. May 13 00:37:13.577208 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 56506 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:13.578493 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:13.581823 systemd-logind[1204]: New session 13 of user core. May 13 00:37:13.582681 systemd[1]: Started session-13.scope. May 13 00:37:13.688042 sshd[3404]: pam_unix(sshd:session): session closed for user core May 13 00:37:13.691003 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:56506.service: Deactivated successfully. May 13 00:37:13.691587 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:37:13.692133 systemd-logind[1204]: Session 13 logged out. Waiting for processes to exit. May 13 00:37:13.693245 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:56508.service. May 13 00:37:13.693878 systemd-logind[1204]: Removed session 13. May 13 00:37:13.734193 sshd[3417]: Accepted publickey for core from 10.0.0.1 port 56508 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:13.735957 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:13.739452 systemd-logind[1204]: New session 14 of user core. May 13 00:37:13.740266 systemd[1]: Started session-14.scope. May 13 00:37:13.980029 sshd[3417]: pam_unix(sshd:session): session closed for user core May 13 00:37:13.983487 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:56512.service. May 13 00:37:13.984054 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:56508.service: Deactivated successfully. May 13 00:37:13.984930 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:37:13.985571 systemd-logind[1204]: Session 14 logged out. Waiting for processes to exit. May 13 00:37:13.986772 systemd-logind[1204]: Removed session 14. May 13 00:37:14.027359 sshd[3427]: Accepted publickey for core from 10.0.0.1 port 56512 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:14.028842 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:14.034460 systemd-logind[1204]: New session 15 of user core. May 13 00:37:14.034973 systemd[1]: Started session-15.scope. May 13 00:37:15.387973 sshd[3427]: pam_unix(sshd:session): session closed for user core May 13 00:37:15.390998 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:56512.service: Deactivated successfully. May 13 00:37:15.391645 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:37:15.394218 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:56520.service. May 13 00:37:15.395257 systemd-logind[1204]: Session 15 logged out. Waiting for processes to exit. May 13 00:37:15.396632 systemd-logind[1204]: Removed session 15. 
May 13 00:37:15.437455 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 56520 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:15.438883 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:15.442673 systemd-logind[1204]: New session 16 of user core. May 13 00:37:15.443107 systemd[1]: Started session-16.scope. May 13 00:37:15.664455 sshd[3451]: pam_unix(sshd:session): session closed for user core May 13 00:37:15.667966 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:56526.service. May 13 00:37:15.668660 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:56520.service: Deactivated successfully. May 13 00:37:15.669351 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:37:15.673504 systemd-logind[1204]: Session 16 logged out. Waiting for processes to exit. May 13 00:37:15.676875 systemd-logind[1204]: Removed session 16. May 13 00:37:15.707949 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 56526 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:15.709837 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:15.713640 systemd-logind[1204]: New session 17 of user core. May 13 00:37:15.715648 systemd[1]: Started session-17.scope. May 13 00:37:15.832287 sshd[3462]: pam_unix(sshd:session): session closed for user core May 13 00:37:15.835480 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:56526.service: Deactivated successfully. May 13 00:37:15.836221 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:37:15.837172 systemd-logind[1204]: Session 17 logged out. Waiting for processes to exit. May 13 00:37:15.837948 systemd-logind[1204]: Removed session 17. May 13 00:37:20.836579 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:56528.service. May 13 00:37:20.876191 sshd[3478]: Accepted publickey for core from 10.0.0.1 port 56528 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:20.879823 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:20.884183 systemd-logind[1204]: New session 18 of user core. May 13 00:37:20.884351 systemd[1]: Started session-18.scope. May 13 00:37:21.010248 sshd[3478]: pam_unix(sshd:session): session closed for user core May 13 00:37:21.012814 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:56528.service: Deactivated successfully. May 13 00:37:21.013542 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:37:21.014053 systemd-logind[1204]: Session 18 logged out. Waiting for processes to exit. May 13 00:37:21.014722 systemd-logind[1204]: Removed session 18. May 13 00:37:26.015493 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:38538.service. May 13 00:37:26.053088 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 38538 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:26.054554 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:26.057861 systemd-logind[1204]: New session 19 of user core. May 13 00:37:26.058914 systemd[1]: Started session-19.scope. May 13 00:37:26.165419 sshd[3494]: pam_unix(sshd:session): session closed for user core May 13 00:37:26.167998 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:38538.service: Deactivated successfully. May 13 00:37:26.168720 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:37:26.169214 systemd-logind[1204]: Session 19 logged out. Waiting for processes to exit. 
May 13 00:37:26.170061 systemd-logind[1204]: Removed session 19. May 13 00:37:31.170156 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:38544.service. May 13 00:37:31.208086 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 38544 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:31.209942 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:31.216188 systemd-logind[1204]: New session 20 of user core. May 13 00:37:31.216625 systemd[1]: Started session-20.scope. May 13 00:37:31.340324 sshd[3510]: pam_unix(sshd:session): session closed for user core May 13 00:37:31.345260 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:38556.service. May 13 00:37:31.346183 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:38544.service: Deactivated successfully. May 13 00:37:31.346943 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:37:31.347524 systemd-logind[1204]: Session 20 logged out. Waiting for processes to exit. May 13 00:37:31.348847 systemd-logind[1204]: Removed session 20. May 13 00:37:31.385197 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 38556 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:31.387042 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:31.394017 systemd-logind[1204]: New session 21 of user core. May 13 00:37:31.396358 systemd[1]: Started session-21.scope. May 13 00:37:31.760825 kubelet[1907]: E0513 00:37:31.758869 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:37:33.138800 env[1216]: time="2025-05-13T00:37:33.137866028Z" level=info msg="StopContainer for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" with timeout 30 (s)" May 13 00:37:33.139567 env[1216]: time="2025-05-13T00:37:33.139137989Z" level=info msg="Stop container \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" with signal terminated" May 13 00:37:33.154815 systemd[1]: cri-containerd-3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb.scope: Deactivated successfully. May 13 00:37:33.171584 env[1216]: time="2025-05-13T00:37:33.171243332Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:37:33.174708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb-rootfs.mount: Deactivated successfully. 
May 13 00:37:33.178626 env[1216]: time="2025-05-13T00:37:33.178589818Z" level=info msg="StopContainer for \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" with timeout 2 (s)" May 13 00:37:33.179073 env[1216]: time="2025-05-13T00:37:33.179051418Z" level=info msg="Stop container \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" with signal terminated" May 13 00:37:33.185624 systemd-networkd[1057]: lxc_health: Link DOWN May 13 00:37:33.185630 systemd-networkd[1057]: lxc_health: Lost carrier May 13 00:37:33.186829 env[1216]: time="2025-05-13T00:37:33.186792184Z" level=info msg="shim disconnected" id=3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb May 13 00:37:33.187067 env[1216]: time="2025-05-13T00:37:33.187047584Z" level=warning msg="cleaning up after shim disconnected" id=3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb namespace=k8s.io May 13 00:37:33.187158 env[1216]: time="2025-05-13T00:37:33.187143464Z" level=info msg="cleaning up dead shim" May 13 00:37:33.194517 env[1216]: time="2025-05-13T00:37:33.194482590Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3574 runtime=io.containerd.runc.v2\n" May 13 00:37:33.199832 env[1216]: time="2025-05-13T00:37:33.199795914Z" level=info msg="StopContainer for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" returns successfully" May 13 00:37:33.200528 env[1216]: time="2025-05-13T00:37:33.200498754Z" level=info msg="StopPodSandbox for \"4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9\"" May 13 00:37:33.200676 env[1216]: time="2025-05-13T00:37:33.200654434Z" level=info msg="Container to stop \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:33.202362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9-shm.mount: Deactivated successfully. May 13 00:37:33.209547 systemd[1]: cri-containerd-4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9.scope: Deactivated successfully. May 13 00:37:33.219614 systemd[1]: cri-containerd-bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab.scope: Deactivated successfully. May 13 00:37:33.219931 systemd[1]: cri-containerd-bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab.scope: Consumed 6.487s CPU time. May 13 00:37:33.234380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9-rootfs.mount: Deactivated successfully. May 13 00:37:33.241871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab-rootfs.mount: Deactivated successfully. 
May 13 00:37:33.242482 env[1216]: time="2025-05-13T00:37:33.242429865Z" level=info msg="shim disconnected" id=4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9 May 13 00:37:33.242482 env[1216]: time="2025-05-13T00:37:33.242477465Z" level=warning msg="cleaning up after shim disconnected" id=4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9 namespace=k8s.io May 13 00:37:33.242482 env[1216]: time="2025-05-13T00:37:33.242486825Z" level=info msg="cleaning up dead shim" May 13 00:37:33.243064 env[1216]: time="2025-05-13T00:37:33.243026706Z" level=info msg="shim disconnected" id=bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab May 13 00:37:33.243178 env[1216]: time="2025-05-13T00:37:33.243161186Z" level=warning msg="cleaning up after shim disconnected" id=bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab namespace=k8s.io May 13 00:37:33.243249 env[1216]: time="2025-05-13T00:37:33.243235986Z" level=info msg="cleaning up dead shim" May 13 00:37:33.251764 env[1216]: time="2025-05-13T00:37:33.251715952Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3621 runtime=io.containerd.runc.v2\n" May 13 00:37:33.251977 env[1216]: time="2025-05-13T00:37:33.251739152Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3620 runtime=io.containerd.runc.v2\n" May 13 00:37:33.252349 env[1216]: time="2025-05-13T00:37:33.252314593Z" level=info msg="TearDown network for sandbox \"4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9\" successfully" May 13 00:37:33.252349 env[1216]: time="2025-05-13T00:37:33.252344313Z" level=info msg="StopPodSandbox for \"4ec709ae52638a0bb6113a6b5d156b69c238368740b0ea09a43a7352f025acb9\" returns successfully" May 13 00:37:33.254636 env[1216]: time="2025-05-13T00:37:33.254330554Z" level=info msg="StopContainer for \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" returns successfully" May 13 00:37:33.255157 env[1216]: time="2025-05-13T00:37:33.255130395Z" level=info msg="StopPodSandbox for \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\"" May 13 00:37:33.255636 env[1216]: time="2025-05-13T00:37:33.255610115Z" level=info msg="Container to stop \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:33.255940 env[1216]: time="2025-05-13T00:37:33.255914675Z" level=info msg="Container to stop \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:33.256240 env[1216]: time="2025-05-13T00:37:33.256215436Z" level=info msg="Container to stop \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:33.256522 env[1216]: time="2025-05-13T00:37:33.256484476Z" level=info msg="Container to stop \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:33.258576 env[1216]: time="2025-05-13T00:37:33.256599676Z" level=info msg="Container to stop \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:33.264775 systemd[1]: 
cri-containerd-e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc.scope: Deactivated successfully. May 13 00:37:33.288649 env[1216]: time="2025-05-13T00:37:33.288599660Z" level=info msg="shim disconnected" id=e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc May 13 00:37:33.288649 env[1216]: time="2025-05-13T00:37:33.288644060Z" level=warning msg="cleaning up after shim disconnected" id=e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc namespace=k8s.io May 13 00:37:33.288901 env[1216]: time="2025-05-13T00:37:33.288659580Z" level=info msg="cleaning up dead shim" May 13 00:37:33.296003 env[1216]: time="2025-05-13T00:37:33.295934385Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3663 runtime=io.containerd.runc.v2\n" May 13 00:37:33.296321 env[1216]: time="2025-05-13T00:37:33.296285225Z" level=info msg="TearDown network for sandbox \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" successfully" May 13 00:37:33.296321 env[1216]: time="2025-05-13T00:37:33.296311626Z" level=info msg="StopPodSandbox for \"e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc\" returns successfully" May 13 00:37:33.425738 kubelet[1907]: I0513 00:37:33.425571 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-lib-modules\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.425738 kubelet[1907]: I0513 00:37:33.425672 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hostproc\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.425738 kubelet[1907]: I0513 00:37:33.425700 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-config-path\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.425738 kubelet[1907]: I0513 00:37:33.425723 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-clustermesh-secrets\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426134 kubelet[1907]: I0513 00:37:33.425750 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq62h\" (UniqueName: \"kubernetes.io/projected/5ca47b04-cf1f-404d-89c7-16cbe6b82188-kube-api-access-fq62h\") pod \"5ca47b04-cf1f-404d-89c7-16cbe6b82188\" (UID: \"5ca47b04-cf1f-404d-89c7-16cbe6b82188\") " May 13 00:37:33.426134 kubelet[1907]: I0513 00:37:33.425771 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlpm\" (UniqueName: \"kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-kube-api-access-9zlpm\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426134 kubelet[1907]: I0513 00:37:33.425787 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/5ca47b04-cf1f-404d-89c7-16cbe6b82188-cilium-config-path\") pod \"5ca47b04-cf1f-404d-89c7-16cbe6b82188\" (UID: \"5ca47b04-cf1f-404d-89c7-16cbe6b82188\") " May 13 00:37:33.426134 kubelet[1907]: I0513 00:37:33.425800 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-etc-cni-netd\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426134 kubelet[1907]: I0513 00:37:33.425814 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-kernel\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426134 kubelet[1907]: I0513 00:37:33.425871 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-cgroup\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426267 kubelet[1907]: I0513 00:37:33.425889 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-run\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426267 kubelet[1907]: I0513 00:37:33.425903 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-bpf-maps\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426267 kubelet[1907]: I0513 00:37:33.425921 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-net\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426267 kubelet[1907]: I0513 00:37:33.425938 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-xtables-lock\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426267 kubelet[1907]: I0513 00:37:33.425954 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cni-path\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.426267 kubelet[1907]: I0513 00:37:33.425973 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hubble-tls\") pod \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\" (UID: \"c54037b3-477b-4bd5-9ae0-e58cb5593a1b\") " May 13 00:37:33.428355 kubelet[1907]: I0513 00:37:33.428304 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-lib-modules" (OuterVolumeSpecName: 
"lib-modules") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428525 kubelet[1907]: I0513 00:37:33.428506 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428661 kubelet[1907]: I0513 00:37:33.428630 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428661 kubelet[1907]: I0513 00:37:33.428643 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hostproc" (OuterVolumeSpecName: "hostproc") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428753 kubelet[1907]: I0513 00:37:33.428672 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428753 kubelet[1907]: I0513 00:37:33.428689 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428753 kubelet[1907]: I0513 00:37:33.428703 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428753 kubelet[1907]: I0513 00:37:33.428718 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428850 kubelet[1907]: I0513 00:37:33.428756 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.428850 kubelet[1907]: I0513 00:37:33.428774 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cni-path" (OuterVolumeSpecName: "cni-path") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:33.431138 kubelet[1907]: I0513 00:37:33.431106 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:37:33.431218 kubelet[1907]: I0513 00:37:33.431110 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca47b04-cf1f-404d-89c7-16cbe6b82188-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ca47b04-cf1f-404d-89c7-16cbe6b82188" (UID: "5ca47b04-cf1f-404d-89c7-16cbe6b82188"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:37:33.431218 kubelet[1907]: I0513 00:37:33.431195 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:37:33.436364 kubelet[1907]: I0513 00:37:33.436321 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-kube-api-access-9zlpm" (OuterVolumeSpecName: "kube-api-access-9zlpm") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "kube-api-access-9zlpm". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:33.436565 kubelet[1907]: I0513 00:37:33.436362 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c54037b3-477b-4bd5-9ae0-e58cb5593a1b" (UID: "c54037b3-477b-4bd5-9ae0-e58cb5593a1b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:33.437544 kubelet[1907]: I0513 00:37:33.437511 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca47b04-cf1f-404d-89c7-16cbe6b82188-kube-api-access-fq62h" (OuterVolumeSpecName: "kube-api-access-fq62h") pod "5ca47b04-cf1f-404d-89c7-16cbe6b82188" (UID: "5ca47b04-cf1f-404d-89c7-16cbe6b82188"). InnerVolumeSpecName "kube-api-access-fq62h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:33.526847 kubelet[1907]: I0513 00:37:33.526802 1907 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.526847 kubelet[1907]: I0513 00:37:33.526836 1907 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.526847 kubelet[1907]: I0513 00:37:33.526850 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526865 1907 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526876 1907 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9zlpm\" (UniqueName: \"kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-kube-api-access-9zlpm\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526884 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca47b04-cf1f-404d-89c7-16cbe6b82188-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526892 1907 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526901 1907 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fq62h\" (UniqueName: \"kubernetes.io/projected/5ca47b04-cf1f-404d-89c7-16cbe6b82188-kube-api-access-fq62h\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526908 1907 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526916 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527052 kubelet[1907]: I0513 00:37:33.526923 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527270 kubelet[1907]: I0513 00:37:33.526931 1907 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527270 kubelet[1907]: I0513 00:37:33.526939 1907 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527270 kubelet[1907]: I0513 00:37:33.526947 1907 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527270 kubelet[1907]: I0513 00:37:33.526954 1907 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.527270 kubelet[1907]: I0513 00:37:33.526962 1907 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c54037b3-477b-4bd5-9ae0-e58cb5593a1b-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:37:33.768934 systemd[1]: Removed slice kubepods-besteffort-pod5ca47b04_cf1f_404d_89c7_16cbe6b82188.slice. May 13 00:37:33.770649 systemd[1]: Removed slice kubepods-burstable-podc54037b3_477b_4bd5_9ae0_e58cb5593a1b.slice. May 13 00:37:33.770738 systemd[1]: kubepods-burstable-podc54037b3_477b_4bd5_9ae0_e58cb5593a1b.slice: Consumed 6.750s CPU time. May 13 00:37:33.938762 kubelet[1907]: I0513 00:37:33.938723 1907 scope.go:117] "RemoveContainer" containerID="3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb" May 13 00:37:33.940517 env[1216]: time="2025-05-13T00:37:33.940470465Z" level=info msg="RemoveContainer for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\"" May 13 00:37:33.944180 env[1216]: time="2025-05-13T00:37:33.944143828Z" level=info msg="RemoveContainer for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" returns successfully" May 13 00:37:33.944550 kubelet[1907]: I0513 00:37:33.944514 1907 scope.go:117] "RemoveContainer" containerID="3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb" May 13 00:37:33.944943 env[1216]: time="2025-05-13T00:37:33.944748748Z" level=error msg="ContainerStatus for \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\": not found" May 13 00:37:33.945338 kubelet[1907]: E0513 00:37:33.945309 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\": not found" containerID="3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb" May 13 00:37:33.945546 kubelet[1907]: I0513 00:37:33.945452 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb"} err="failed to get container status \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dfc34368ce6c97ef8c40ea12c9092c89acafa59f53045b9f03579ee781eb1cb\": not found" May 13 00:37:33.945643 kubelet[1907]: I0513 00:37:33.945629 1907 scope.go:117] "RemoveContainer" containerID="bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab" May 13 00:37:33.949410 env[1216]: time="2025-05-13T00:37:33.949361672Z" level=info msg="RemoveContainer for 
\"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\"" May 13 00:37:33.952454 env[1216]: time="2025-05-13T00:37:33.952418714Z" level=info msg="RemoveContainer for \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" returns successfully" May 13 00:37:33.952658 kubelet[1907]: I0513 00:37:33.952624 1907 scope.go:117] "RemoveContainer" containerID="803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10" May 13 00:37:33.953832 env[1216]: time="2025-05-13T00:37:33.953797435Z" level=info msg="RemoveContainer for \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\"" May 13 00:37:33.957095 env[1216]: time="2025-05-13T00:37:33.956227557Z" level=info msg="RemoveContainer for \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\" returns successfully" May 13 00:37:33.957201 kubelet[1907]: I0513 00:37:33.957004 1907 scope.go:117] "RemoveContainer" containerID="56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab" May 13 00:37:33.958286 env[1216]: time="2025-05-13T00:37:33.958255958Z" level=info msg="RemoveContainer for \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\"" May 13 00:37:33.960915 env[1216]: time="2025-05-13T00:37:33.960871120Z" level=info msg="RemoveContainer for \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\" returns successfully" May 13 00:37:33.961150 kubelet[1907]: I0513 00:37:33.961109 1907 scope.go:117] "RemoveContainer" containerID="2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1" May 13 00:37:33.962281 env[1216]: time="2025-05-13T00:37:33.962251441Z" level=info msg="RemoveContainer for \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\"" May 13 00:37:33.964403 env[1216]: time="2025-05-13T00:37:33.964358123Z" level=info msg="RemoveContainer for \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\" returns successfully" May 13 00:37:33.964576 kubelet[1907]: I0513 00:37:33.964558 1907 scope.go:117] "RemoveContainer" containerID="84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338" May 13 00:37:33.965609 env[1216]: time="2025-05-13T00:37:33.965582924Z" level=info msg="RemoveContainer for \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\"" May 13 00:37:33.968625 env[1216]: time="2025-05-13T00:37:33.967894685Z" level=info msg="RemoveContainer for \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\" returns successfully" May 13 00:37:33.968625 env[1216]: time="2025-05-13T00:37:33.968285806Z" level=error msg="ContainerStatus for \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\": not found" May 13 00:37:33.968625 env[1216]: time="2025-05-13T00:37:33.968605486Z" level=error msg="ContainerStatus for \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\": not found" May 13 00:37:33.968791 kubelet[1907]: I0513 00:37:33.968085 1907 scope.go:117] "RemoveContainer" containerID="bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab" May 13 00:37:33.968791 kubelet[1907]: E0513 00:37:33.968416 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to 
find container \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\": not found" containerID="bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab" May 13 00:37:33.968791 kubelet[1907]: I0513 00:37:33.968436 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab"} err="failed to get container status \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc9b7f9f6bcf5d1d63337a40182e9886f02eda8a1d756666e10af7b059dd79ab\": not found" May 13 00:37:33.968791 kubelet[1907]: I0513 00:37:33.968452 1907 scope.go:117] "RemoveContainer" containerID="803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10" May 13 00:37:33.968791 kubelet[1907]: E0513 00:37:33.968718 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\": not found" containerID="803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10" May 13 00:37:33.968791 kubelet[1907]: I0513 00:37:33.968745 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10"} err="failed to get container status \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\": rpc error: code = NotFound desc = an error occurred when try to find container \"803f83a275fec647e5366f4c1b5338c40b4e459f311330ac47305a279cba1d10\": not found" May 13 00:37:33.968791 kubelet[1907]: I0513 00:37:33.968772 1907 scope.go:117] "RemoveContainer" containerID="56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab" May 13 00:37:33.968966 env[1216]: time="2025-05-13T00:37:33.968893406Z" level=error msg="ContainerStatus for \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\": not found" May 13 00:37:33.969013 kubelet[1907]: E0513 00:37:33.968982 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\": not found" containerID="56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab" May 13 00:37:33.969013 kubelet[1907]: I0513 00:37:33.969008 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab"} err="failed to get container status \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"56e8dd6aa3c87c5bb6dbb0a885357c16e09b19dc20ebedccbffbd9cb9b3536ab\": not found" May 13 00:37:33.969069 kubelet[1907]: I0513 00:37:33.969022 1907 scope.go:117] "RemoveContainer" containerID="2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1" May 13 00:37:33.969159 env[1216]: time="2025-05-13T00:37:33.969123686Z" level=error msg="ContainerStatus for \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\": not found" May 13 00:37:33.969237 kubelet[1907]: E0513 00:37:33.969220 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\": not found" containerID="2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1" May 13 00:37:33.969287 kubelet[1907]: I0513 00:37:33.969240 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1"} err="failed to get container status \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f4d4ce75fc66a77882e5fcc08bf57bb0f4e41629ab78b5cb932e7bb7202aed1\": not found" May 13 00:37:33.969287 kubelet[1907]: I0513 00:37:33.969253 1907 scope.go:117] "RemoveContainer" containerID="84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338" May 13 00:37:33.969597 env[1216]: time="2025-05-13T00:37:33.969546767Z" level=error msg="ContainerStatus for \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\": not found" May 13 00:37:33.969721 kubelet[1907]: E0513 00:37:33.969704 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\": not found" containerID="84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338" May 13 00:37:33.969779 kubelet[1907]: I0513 00:37:33.969731 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338"} err="failed to get container status \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\": rpc error: code = NotFound desc = an error occurred when try to find container \"84e4d333303e3d5ad30e382979b8045fba8bf9cd30bc063d751764dcead50338\": not found" May 13 00:37:34.144303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc-rootfs.mount: Deactivated successfully. May 13 00:37:34.144419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5bb3060bbae2faf0f15fa84a56f8360c096c0008c3b41d8245060282bce45cc-shm.mount: Deactivated successfully. May 13 00:37:34.144476 systemd[1]: var-lib-kubelet-pods-c54037b3\x2d477b\x2d4bd5\x2d9ae0\x2de58cb5593a1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9zlpm.mount: Deactivated successfully. May 13 00:37:34.144527 systemd[1]: var-lib-kubelet-pods-5ca47b04\x2dcf1f\x2d404d\x2d89c7\x2d16cbe6b82188-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfq62h.mount: Deactivated successfully. May 13 00:37:34.144577 systemd[1]: var-lib-kubelet-pods-c54037b3\x2d477b\x2d4bd5\x2d9ae0\x2de58cb5593a1b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:37:34.144625 systemd[1]: var-lib-kubelet-pods-c54037b3\x2d477b\x2d4bd5\x2d9ae0\x2de58cb5593a1b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:37:34.804817 kubelet[1907]: E0513 00:37:34.804782 1907 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:37:35.091240 sshd[3522]: pam_unix(sshd:session): session closed for user core May 13 00:37:35.094509 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:38556.service: Deactivated successfully. May 13 00:37:35.095111 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:37:35.095272 systemd[1]: session-21.scope: Consumed 1.039s CPU time. May 13 00:37:35.095860 systemd-logind[1204]: Session 21 logged out. Waiting for processes to exit. May 13 00:37:35.097066 systemd[1]: Started sshd@21-10.0.0.114:22-10.0.0.1:40520.service. May 13 00:37:35.098563 systemd-logind[1204]: Removed session 21. May 13 00:37:35.136784 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 40520 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:35.138431 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:35.143119 systemd-logind[1204]: New session 22 of user core. May 13 00:37:35.143663 systemd[1]: Started session-22.scope. May 13 00:37:35.759609 kubelet[1907]: I0513 00:37:35.759573 1907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca47b04-cf1f-404d-89c7-16cbe6b82188" path="/var/lib/kubelet/pods/5ca47b04-cf1f-404d-89c7-16cbe6b82188/volumes" May 13 00:37:35.760204 kubelet[1907]: I0513 00:37:35.760179 1907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" path="/var/lib/kubelet/pods/c54037b3-477b-4bd5-9ae0-e58cb5593a1b/volumes" May 13 00:37:36.179832 sshd[3681]: pam_unix(sshd:session): session closed for user core May 13 00:37:36.183803 systemd[1]: Started sshd@22-10.0.0.114:22-10.0.0.1:40536.service. May 13 00:37:36.184350 systemd[1]: sshd@21-10.0.0.114:22-10.0.0.1:40520.service: Deactivated successfully. May 13 00:37:36.185007 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:37:36.186428 systemd-logind[1204]: Session 22 logged out. Waiting for processes to exit. May 13 00:37:36.187569 systemd-logind[1204]: Removed session 22. 
May 13 00:37:36.198932 kubelet[1907]: E0513 00:37:36.198887 1907 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" containerName="clean-cilium-state" May 13 00:37:36.198932 kubelet[1907]: E0513 00:37:36.198922 1907 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" containerName="cilium-agent" May 13 00:37:36.198932 kubelet[1907]: E0513 00:37:36.198931 1907 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca47b04-cf1f-404d-89c7-16cbe6b82188" containerName="cilium-operator" May 13 00:37:36.198932 kubelet[1907]: E0513 00:37:36.198938 1907 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" containerName="apply-sysctl-overwrites" May 13 00:37:36.199314 kubelet[1907]: E0513 00:37:36.198946 1907 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" containerName="mount-bpf-fs" May 13 00:37:36.199314 kubelet[1907]: E0513 00:37:36.198952 1907 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" containerName="mount-cgroup" May 13 00:37:36.199314 kubelet[1907]: I0513 00:37:36.198994 1907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54037b3-477b-4bd5-9ae0-e58cb5593a1b" containerName="cilium-agent" May 13 00:37:36.199314 kubelet[1907]: I0513 00:37:36.199001 1907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca47b04-cf1f-404d-89c7-16cbe6b82188" containerName="cilium-operator" May 13 00:37:36.210257 systemd[1]: Created slice kubepods-burstable-pode0357702_43b7_434b_8884_1c649bdadc9b.slice. May 13 00:37:36.235202 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 40536 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:36.236530 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:36.239805 systemd-logind[1204]: New session 23 of user core. May 13 00:37:36.240677 systemd[1]: Started session-23.scope. 
May 13 00:37:36.342436 kubelet[1907]: I0513 00:37:36.342382 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-etc-cni-netd\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342592 kubelet[1907]: I0513 00:37:36.342444 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-ipsec-secrets\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342592 kubelet[1907]: I0513 00:37:36.342466 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-kernel\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342592 kubelet[1907]: I0513 00:37:36.342493 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-hostproc\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342592 kubelet[1907]: I0513 00:37:36.342513 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-xtables-lock\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342592 kubelet[1907]: I0513 00:37:36.342558 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-net\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342736 kubelet[1907]: I0513 00:37:36.342595 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgdm\" (UniqueName: \"kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-kube-api-access-prgdm\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342736 kubelet[1907]: I0513 00:37:36.342618 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cni-path\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342736 kubelet[1907]: I0513 00:37:36.342637 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-config-path\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342736 kubelet[1907]: I0513 00:37:36.342657 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-bpf-maps\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342736 kubelet[1907]: I0513 00:37:36.342680 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-clustermesh-secrets\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342736 kubelet[1907]: I0513 00:37:36.342708 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-cgroup\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342887 kubelet[1907]: I0513 00:37:36.342755 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-hubble-tls\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342887 kubelet[1907]: I0513 00:37:36.342777 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-lib-modules\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.342887 kubelet[1907]: I0513 00:37:36.342793 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-run\") pod \"cilium-tl6rx\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") " pod="kube-system/cilium-tl6rx"
May 13 00:37:36.365192 sshd[3692]: pam_unix(sshd:session): session closed for user core
May 13 00:37:36.369267 systemd[1]: Started sshd@23-10.0.0.114:22-10.0.0.1:40552.service.
May 13 00:37:36.372467 systemd-logind[1204]: Session 23 logged out. Waiting for processes to exit.
May 13 00:37:36.372680 systemd[1]: sshd@22-10.0.0.114:22-10.0.0.1:40536.service: Deactivated successfully.
May 13 00:37:36.373349 systemd[1]: session-23.scope: Deactivated successfully.
May 13 00:37:36.375437 kubelet[1907]: E0513 00:37:36.375299 1907 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-prgdm lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-tl6rx" podUID="e0357702-43b7-434b-8884-1c649bdadc9b"
May 13 00:37:36.376736 systemd-logind[1204]: Removed session 23.
May 13 00:37:36.412165 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 40552 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:37:36.413469 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:37:36.417185 systemd-logind[1204]: New session 24 of user core.
May 13 00:37:36.417647 systemd[1]: Started session-24.scope.
May 13 00:37:37.047830 kubelet[1907]: I0513 00:37:37.047788 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-run\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.047830 kubelet[1907]: I0513 00:37:37.047830 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-bpf-maps\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048012 kubelet[1907]: I0513 00:37:37.047854 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-clustermesh-secrets\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048012 kubelet[1907]: I0513 00:37:37.047873 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-xtables-lock\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048012 kubelet[1907]: I0513 00:37:37.047888 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-net\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048012 kubelet[1907]: I0513 00:37:37.047903 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-etc-cni-netd\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048012 kubelet[1907]: I0513 00:37:37.047918 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cni-path\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048012 kubelet[1907]: I0513 00:37:37.047932 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-cgroup\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048148 kubelet[1907]: I0513 00:37:37.047958 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-lib-modules\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048148 kubelet[1907]: I0513 00:37:37.047975 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-kernel\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048148 kubelet[1907]: I0513 00:37:37.047993 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prgdm\" (UniqueName: \"kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-kube-api-access-prgdm\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048148 kubelet[1907]: I0513 00:37:37.048012 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-ipsec-secrets\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048148 kubelet[1907]: I0513 00:37:37.048029 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-hubble-tls\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048148 kubelet[1907]: I0513 00:37:37.048046 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-hostproc\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048271 kubelet[1907]: I0513 00:37:37.048065 1907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-config-path\") pod \"e0357702-43b7-434b-8884-1c649bdadc9b\" (UID: \"e0357702-43b7-434b-8884-1c649bdadc9b\") "
May 13 00:37:37.048374 kubelet[1907]: I0513 00:37:37.048346 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cni-path" (OuterVolumeSpecName: "cni-path") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.048503 kubelet[1907]: I0513 00:37:37.048487 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.048586 kubelet[1907]: I0513 00:37:37.048573 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.048664 kubelet[1907]: I0513 00:37:37.048651 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.049943 kubelet[1907]: I0513 00:37:37.049901 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:37:37.050016 kubelet[1907]: I0513 00:37:37.049954 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-hostproc" (OuterVolumeSpecName: "hostproc") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.052386 systemd[1]: var-lib-kubelet-pods-e0357702\x2d43b7\x2d434b\x2d8884\x2d1c649bdadc9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dprgdm.mount: Deactivated successfully.
May 13 00:37:37.052501 systemd[1]: var-lib-kubelet-pods-e0357702\x2d43b7\x2d434b\x2d8884\x2d1c649bdadc9b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:37:37.054124 kubelet[1907]: I0513 00:37:37.054087 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-kube-api-access-prgdm" (OuterVolumeSpecName: "kube-api-access-prgdm") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "kube-api-access-prgdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:37:37.054201 kubelet[1907]: I0513 00:37:37.054135 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.054201 kubelet[1907]: I0513 00:37:37.054155 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.054201 kubelet[1907]: I0513 00:37:37.054179 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.054201 kubelet[1907]: I0513 00:37:37.054196 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.054312 kubelet[1907]: I0513 00:37:37.054212 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:37.054853 kubelet[1907]: I0513 00:37:37.054827 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:37:37.055319 kubelet[1907]: I0513 00:37:37.055286 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:37:37.056114 kubelet[1907]: I0513 00:37:37.056072 1907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e0357702-43b7-434b-8884-1c649bdadc9b" (UID: "e0357702-43b7-434b-8884-1c649bdadc9b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:37:37.148472 kubelet[1907]: I0513 00:37:37.148425 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148472 kubelet[1907]: I0513 00:37:37.148461 1907 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148472 kubelet[1907]: I0513 00:37:37.148476 1907 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148472 kubelet[1907]: I0513 00:37:37.148486 1907 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148494 1907 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148503 1907 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148510 1907 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148518 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148526 1907 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148534 1907 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148543 1907 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-prgdm\" (UniqueName: \"kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-kube-api-access-prgdm\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148680 kubelet[1907]: I0513 00:37:37.148552 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148851 kubelet[1907]: I0513 00:37:37.148560 1907 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0357702-43b7-434b-8884-1c649bdadc9b-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148851 kubelet[1907]: I0513 00:37:37.148568 1907 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0357702-43b7-434b-8884-1c649bdadc9b-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.148851 kubelet[1907]: I0513 00:37:37.148575 1907 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0357702-43b7-434b-8884-1c649bdadc9b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:37:37.447631 systemd[1]: var-lib-kubelet-pods-e0357702\x2d43b7\x2d434b\x2d8884\x2d1c649bdadc9b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:37:37.447742 systemd[1]: var-lib-kubelet-pods-e0357702\x2d43b7\x2d434b\x2d8884\x2d1c649bdadc9b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 13 00:37:37.762167 systemd[1]: Removed slice kubepods-burstable-pode0357702_43b7_434b_8884_1c649bdadc9b.slice.
May 13 00:37:38.003369 systemd[1]: Created slice kubepods-burstable-podeffcf868_e6f3_4075_886b_b0ac6bcfc649.slice.
May 13 00:37:38.152239 kubelet[1907]: I0513 00:37:38.152193 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/effcf868-e6f3-4075-886b-b0ac6bcfc649-clustermesh-secrets\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152239 kubelet[1907]: I0513 00:37:38.152242 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-bpf-maps\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152602 kubelet[1907]: I0513 00:37:38.152261 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/effcf868-e6f3-4075-886b-b0ac6bcfc649-hubble-tls\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152602 kubelet[1907]: I0513 00:37:38.152277 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-xtables-lock\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152602 kubelet[1907]: I0513 00:37:38.152293 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-lib-modules\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152602 kubelet[1907]: I0513 00:37:38.152308 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-etc-cni-netd\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152602 kubelet[1907]: I0513 00:37:38.152324 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-host-proc-sys-net\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152602 kubelet[1907]: I0513 00:37:38.152342 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-cilium-cgroup\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152789 kubelet[1907]: I0513 00:37:38.152359 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-cni-path\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152789 kubelet[1907]: I0513 00:37:38.152375 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/effcf868-e6f3-4075-886b-b0ac6bcfc649-cilium-ipsec-secrets\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152789 kubelet[1907]: I0513 00:37:38.152392 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-host-proc-sys-kernel\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152789 kubelet[1907]: I0513 00:37:38.152432 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-cilium-run\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152789 kubelet[1907]: I0513 00:37:38.152446 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/effcf868-e6f3-4075-886b-b0ac6bcfc649-cilium-config-path\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152894 kubelet[1907]: I0513 00:37:38.152460 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5wvf\" (UniqueName: \"kubernetes.io/projected/effcf868-e6f3-4075-886b-b0ac6bcfc649-kube-api-access-n5wvf\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.152894 kubelet[1907]: I0513 00:37:38.152477 1907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/effcf868-e6f3-4075-886b-b0ac6bcfc649-hostproc\") pod \"cilium-9v8cf\" (UID: \"effcf868-e6f3-4075-886b-b0ac6bcfc649\") " pod="kube-system/cilium-9v8cf"
May 13 00:37:38.305539 kubelet[1907]: E0513 00:37:38.305504 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:38.307296 env[1216]: time="2025-05-13T00:37:38.307254553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9v8cf,Uid:effcf868-e6f3-4075-886b-b0ac6bcfc649,Namespace:kube-system,Attempt:0,}"
May 13 00:37:38.329019 env[1216]: time="2025-05-13T00:37:38.328952592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:37:38.329188 env[1216]: time="2025-05-13T00:37:38.329163913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:37:38.329331 env[1216]: time="2025-05-13T00:37:38.329308113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:37:38.329884 env[1216]: time="2025-05-13T00:37:38.329809034Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50 pid=3736 runtime=io.containerd.runc.v2
May 13 00:37:38.339777 systemd[1]: Started cri-containerd-a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50.scope.
May 13 00:37:38.370934 env[1216]: time="2025-05-13T00:37:38.370891787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9v8cf,Uid:effcf868-e6f3-4075-886b-b0ac6bcfc649,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\""
May 13 00:37:38.371692 kubelet[1907]: E0513 00:37:38.371666 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:38.376993 env[1216]: time="2025-05-13T00:37:38.376950358Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:37:38.393594 env[1216]: time="2025-05-13T00:37:38.393539428Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940\""
May 13 00:37:38.394869 env[1216]: time="2025-05-13T00:37:38.394840110Z" level=info msg="StartContainer for \"e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940\""
May 13 00:37:38.412391 systemd[1]: Started cri-containerd-e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940.scope.
May 13 00:37:38.440260 env[1216]: time="2025-05-13T00:37:38.440209111Z" level=info msg="StartContainer for \"e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940\" returns successfully"
May 13 00:37:38.456308 systemd[1]: cri-containerd-e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940.scope: Deactivated successfully.
May 13 00:37:38.473706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940-rootfs.mount: Deactivated successfully.
May 13 00:37:38.485406 env[1216]: time="2025-05-13T00:37:38.485351312Z" level=info msg="shim disconnected" id=e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940
May 13 00:37:38.485586 env[1216]: time="2025-05-13T00:37:38.485409792Z" level=warning msg="cleaning up after shim disconnected" id=e93150121ecd4a6f40e0779fb7aa444a861b6a43b23c0a41f05099ddedcbd940 namespace=k8s.io
May 13 00:37:38.485586 env[1216]: time="2025-05-13T00:37:38.485421792Z" level=info msg="cleaning up dead shim"
May 13 00:37:38.492347 env[1216]: time="2025-05-13T00:37:38.492301845Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3822 runtime=io.containerd.runc.v2\n"
May 13 00:37:38.955554 kubelet[1907]: E0513 00:37:38.955507 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:38.957634 env[1216]: time="2025-05-13T00:37:38.957551758Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:37:38.969270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772845122.mount: Deactivated successfully.
May 13 00:37:38.973243 env[1216]: time="2025-05-13T00:37:38.973192866Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa\""
May 13 00:37:38.974372 env[1216]: time="2025-05-13T00:37:38.974311028Z" level=info msg="StartContainer for \"9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa\""
May 13 00:37:38.999681 systemd[1]: Started cri-containerd-9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa.scope.
May 13 00:37:39.047463 env[1216]: time="2025-05-13T00:37:39.045695484Z" level=info msg="StartContainer for \"9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa\" returns successfully"
May 13 00:37:39.056847 systemd[1]: cri-containerd-9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa.scope: Deactivated successfully.
May 13 00:37:39.084094 env[1216]: time="2025-05-13T00:37:39.084011680Z" level=info msg="shim disconnected" id=9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa
May 13 00:37:39.084094 env[1216]: time="2025-05-13T00:37:39.084062400Z" level=warning msg="cleaning up after shim disconnected" id=9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa namespace=k8s.io
May 13 00:37:39.084094 env[1216]: time="2025-05-13T00:37:39.084074920Z" level=info msg="cleaning up dead shim"
May 13 00:37:39.091551 env[1216]: time="2025-05-13T00:37:39.091503615Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3884 runtime=io.containerd.runc.v2\n"
May 13 00:37:39.447750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a74cb1552305cf14fd9b269b53e59496de95556071b0ade8b3f54cbd11dc0fa-rootfs.mount: Deactivated successfully.
May 13 00:37:39.759649 kubelet[1907]: I0513 00:37:39.759548 1907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0357702-43b7-434b-8884-1c649bdadc9b" path="/var/lib/kubelet/pods/e0357702-43b7-434b-8884-1c649bdadc9b/volumes"
May 13 00:37:39.805284 kubelet[1907]: E0513 00:37:39.805250 1907 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:37:39.958634 kubelet[1907]: E0513 00:37:39.958595 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:39.961054 env[1216]: time="2025-05-13T00:37:39.960849016Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:37:39.977760 env[1216]: time="2025-05-13T00:37:39.977661370Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c\""
May 13 00:37:39.979757 env[1216]: time="2025-05-13T00:37:39.979697934Z" level=info msg="StartContainer for \"068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c\""
May 13 00:37:39.999161 systemd[1]: Started cri-containerd-068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c.scope.
May 13 00:37:40.035658 env[1216]: time="2025-05-13T00:37:40.035557851Z" level=info msg="StartContainer for \"068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c\" returns successfully"
May 13 00:37:40.037655 systemd[1]: cri-containerd-068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c.scope: Deactivated successfully.
May 13 00:37:40.060975 env[1216]: time="2025-05-13T00:37:40.060914586Z" level=info msg="shim disconnected" id=068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c
May 13 00:37:40.060975 env[1216]: time="2025-05-13T00:37:40.060971866Z" level=warning msg="cleaning up after shim disconnected" id=068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c namespace=k8s.io
May 13 00:37:40.060975 env[1216]: time="2025-05-13T00:37:40.060982786Z" level=info msg="cleaning up dead shim"
May 13 00:37:40.069158 env[1216]: time="2025-05-13T00:37:40.069091843Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3942 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T00:37:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
May 13 00:37:40.447836 systemd[1]: run-containerd-runc-k8s.io-068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c-runc.LUaoaT.mount: Deactivated successfully.
May 13 00:37:40.447929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-068c3785e4fb05b846fe479d9f9eac99ec6134fb0a86d48f76b42e54daf2aa8c-rootfs.mount: Deactivated successfully.
May 13 00:37:40.962665 kubelet[1907]: E0513 00:37:40.962635 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:40.964975 env[1216]: time="2025-05-13T00:37:40.964916822Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:37:40.984964 env[1216]: time="2025-05-13T00:37:40.984914586Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365\""
May 13 00:37:40.985860 env[1216]: time="2025-05-13T00:37:40.985834028Z" level=info msg="StartContainer for \"c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365\""
May 13 00:37:41.012285 systemd[1]: Started cri-containerd-c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365.scope.
May 13 00:37:41.038547 systemd[1]: cri-containerd-c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365.scope: Deactivated successfully.
May 13 00:37:41.039552 env[1216]: time="2025-05-13T00:37:41.039266510Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeffcf868_e6f3_4075_886b_b0ac6bcfc649.slice/cri-containerd-c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365.scope/memory.events\": no such file or directory"
May 13 00:37:41.041595 env[1216]: time="2025-05-13T00:37:41.041553075Z" level=info msg="StartContainer for \"c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365\" returns successfully"
May 13 00:37:41.063051 env[1216]: time="2025-05-13T00:37:41.063002966Z" level=info msg="shim disconnected" id=c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365
May 13 00:37:41.063268 env[1216]: time="2025-05-13T00:37:41.063248526Z" level=warning msg="cleaning up after shim disconnected" id=c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365 namespace=k8s.io
May 13 00:37:41.063342 env[1216]: time="2025-05-13T00:37:41.063317886Z" level=info msg="cleaning up dead shim"
May 13 00:37:41.069442 env[1216]: time="2025-05-13T00:37:41.069390901Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3996 runtime=io.containerd.runc.v2\n"
May 13 00:37:41.447928 systemd[1]: run-containerd-runc-k8s.io-c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365-runc.Q77JR8.mount: Deactivated successfully.
May 13 00:37:41.448030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4d64fef0eda675f238e786b0ae55ee9f416f90aee648b69ba43139160987365-rootfs.mount: Deactivated successfully.
May 13 00:37:41.756981 kubelet[1907]: E0513 00:37:41.756877 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:41.929458 kubelet[1907]: I0513 00:37:41.929412 1907 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:37:41Z","lastTransitionTime":"2025-05-13T00:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:37:41.966578 kubelet[1907]: E0513 00:37:41.966527 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:41.968701 env[1216]: time="2025-05-13T00:37:41.968666007Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:37:41.983938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445421484.mount: Deactivated successfully.
May 13 00:37:41.992487 env[1216]: time="2025-05-13T00:37:41.992433943Z" level=info msg="CreateContainer within sandbox \"a5d7e1e757fdbef98b22c4e31d7f558641151813e0c6746e667f8c0d92717c50\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8420ce66abbb070d1b511f963b50cfbdbd644d6616a818954fe5359ddd045791\""
May 13 00:37:41.993474 env[1216]: time="2025-05-13T00:37:41.993443706Z" level=info msg="StartContainer for \"8420ce66abbb070d1b511f963b50cfbdbd644d6616a818954fe5359ddd045791\""
May 13 00:37:42.009717 systemd[1]: Started cri-containerd-8420ce66abbb070d1b511f963b50cfbdbd644d6616a818954fe5359ddd045791.scope.
May 13 00:37:42.056204 env[1216]: time="2025-05-13T00:37:42.056153182Z" level=info msg="StartContainer for \"8420ce66abbb070d1b511f963b50cfbdbd644d6616a818954fe5359ddd045791\" returns successfully"
May 13 00:37:42.370441 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 13 00:37:42.970846 kubelet[1907]: E0513 00:37:42.970803 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:42.986896 kubelet[1907]: I0513 00:37:42.986237 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9v8cf" podStartSLOduration=5.986218562 podStartE2EDuration="5.986218562s" podCreationTimestamp="2025-05-13 00:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:37:42.986017801 +0000 UTC m=+83.309044139" watchObservedRunningTime="2025-05-13 00:37:42.986218562 +0000 UTC m=+83.309244900"
May 13 00:37:44.306314 kubelet[1907]: E0513 00:37:44.306262 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:45.183760 systemd-networkd[1057]: lxc_health: Link UP
May 13 00:37:45.193480 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:37:45.191122 systemd-networkd[1057]: lxc_health: Gained carrier
May 13 00:37:46.307590 kubelet[1907]: E0513 00:37:46.307557 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:46.983996 kubelet[1907]: E0513 00:37:46.980872 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:47.187557 systemd-networkd[1057]: lxc_health: Gained IPv6LL
May 13 00:37:48.757525 kubelet[1907]: E0513 00:37:48.757442 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:49.100760 systemd[1]: run-containerd-runc-k8s.io-8420ce66abbb070d1b511f963b50cfbdbd644d6616a818954fe5359ddd045791-runc.J2ccr6.mount: Deactivated successfully.
May 13 00:37:51.252335 kubelet[1907]: E0513 00:37:51.251921 1907 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:49056->127.0.0.1:42689: read tcp 127.0.0.1:49056->127.0.0.1:42689: read: connection reset by peer
May 13 00:37:51.280156 sshd[3705]: pam_unix(sshd:session): session closed for user core
May 13 00:37:51.282661 systemd[1]: sshd@23-10.0.0.114:22-10.0.0.1:40552.service: Deactivated successfully.
May 13 00:37:51.283478 systemd[1]: session-24.scope: Deactivated successfully.
May 13 00:37:51.283972 systemd-logind[1204]: Session 24 logged out. Waiting for processes to exit.
May 13 00:37:51.284604 systemd-logind[1204]: Removed session 24.