Jul 12 00:20:19.761273 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:20:19.761294 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025
Jul 12 00:20:19.761302 kernel: efi: EFI v2.70 by EDK II
Jul 12 00:20:19.761308 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 12 00:20:19.761314 kernel: random: crng init done
Jul 12 00:20:19.761319 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:20:19.761326 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 12 00:20:19.761333 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:20:19.761339 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761345 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761351 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761356 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761362 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761368 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761377 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761383 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761389 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:19.761396 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:20:19.761402 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:20:19.761408 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:20:19.761414 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 12 00:20:19.761420 kernel: Zone ranges:
Jul 12 00:20:19.761426 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:20:19.761433 kernel: DMA32 empty
Jul 12 00:20:19.761439 kernel: Normal empty
Jul 12 00:20:19.761445 kernel: Movable zone start for each node
Jul 12 00:20:19.761451 kernel: Early memory node ranges
Jul 12 00:20:19.761457 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 12 00:20:19.761463 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 12 00:20:19.761469 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 12 00:20:19.761475 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 12 00:20:19.761481 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 12 00:20:19.761487 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 12 00:20:19.761493 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 12 00:20:19.761500 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:20:19.761507 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:20:19.761513 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:20:19.761519 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:20:19.761524 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:20:19.761531 kernel: psci: Trusted OS migration not required
Jul 12 00:20:19.761539 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:20:19.761546 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:20:19.761553 kernel: ACPI: SRAT not present
Jul 12 00:20:19.761560 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 12 00:20:19.761567 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 12 00:20:19.761574 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:20:19.761580 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:20:19.761587 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:20:19.761593 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:20:19.761599 kernel: CPU features: detected: Spectre-v4
Jul 12 00:20:19.761605 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:20:19.761613 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:20:19.761619 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:20:19.761625 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:20:19.761632 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:20:19.761643 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:20:19.761651 kernel: Policy zone: DMA
Jul 12 00:20:19.761659 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:20:19.761665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:20:19.761672 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:20:19.761678 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:20:19.761684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:20:19.761692 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 12 00:20:19.761699 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:20:19.761705 kernel: trace event string verifier disabled
Jul 12 00:20:19.761711 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:20:19.761718 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:20:19.761738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:20:19.761744 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:20:19.761751 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:20:19.761757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:20:19.761764 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:20:19.761770 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:20:19.761777 kernel: GICv3: 256 SPIs implemented
Jul 12 00:20:19.761784 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:20:19.761790 kernel: GICv3: Distributor has no Range Selector support
Jul 12 00:20:19.761796 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:20:19.761803 kernel: GICv3: 16 PPIs implemented
Jul 12 00:20:19.761817 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:20:19.761823 kernel: ACPI: SRAT not present
Jul 12 00:20:19.761830 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:20:19.761836 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:20:19.761843 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:20:19.761849 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 12 00:20:19.761856 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 12 00:20:19.761865 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:19.761871 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:20:19.761878 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:20:19.761895 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:20:19.761902 kernel: arm-pv: using stolen time PV
Jul 12 00:20:19.761909 kernel: Console: colour dummy device 80x25
Jul 12 00:20:19.761915 kernel: ACPI: Core revision 20210730
Jul 12 00:20:19.761922 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:20:19.761929 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:20:19.761935 kernel: LSM: Security Framework initializing
Jul 12 00:20:19.761943 kernel: SELinux: Initializing.
Jul 12 00:20:19.761959 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:20:19.761986 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:20:19.761992 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:20:19.761999 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:20:19.762006 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:20:19.762012 kernel: Remapping and enabling EFI services.
Jul 12 00:20:19.762019 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:20:19.762025 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:20:19.762034 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:20:19.762041 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 12 00:20:19.762048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:19.762054 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:20:19.762061 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:20:19.762068 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:20:19.762074 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 12 00:20:19.762081 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:19.762087 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:20:19.762094 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:20:19.762101 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:20:19.762108 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 12 00:20:19.762115 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:19.762121 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:20:19.762133 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:20:19.762141 kernel: SMP: Total of 4 processors activated.
Jul 12 00:20:19.762148 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:20:19.762155 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:20:19.762162 kernel: CPU features: detected: Common not Private translations
Jul 12 00:20:19.762168 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:20:19.762175 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:20:19.762183 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:20:19.762191 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:20:19.762199 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:20:19.762205 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:20:19.762212 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:20:19.762220 kernel: alternatives: patching kernel code
Jul 12 00:20:19.762231 kernel: devtmpfs: initialized
Jul 12 00:20:19.762237 kernel: KASLR enabled
Jul 12 00:20:19.762245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:20:19.762251 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:20:19.762259 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:20:19.762266 kernel: SMBIOS 3.0.0 present.
Jul 12 00:20:19.762273 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 12 00:20:19.762280 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:20:19.762287 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:20:19.762299 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:20:19.762306 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:20:19.762313 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:20:19.762320 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Jul 12 00:20:19.762327 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:20:19.762334 kernel: cpuidle: using governor menu
Jul 12 00:20:19.762341 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:20:19.762348 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:20:19.762355 kernel: ACPI: bus type PCI registered
Jul 12 00:20:19.762365 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:20:19.762372 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:20:19.762379 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:20:19.762386 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:20:19.762393 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:20:19.762400 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:20:19.762407 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:20:19.762414 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:20:19.762421 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:20:19.762432 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:20:19.762439 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:20:19.762445 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 12 00:20:19.762452 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 12 00:20:19.762460 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 12 00:20:19.762467 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:20:19.762475 kernel: ACPI: Interpreter enabled
Jul 12 00:20:19.762482 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:20:19.762489 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:20:19.762497 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:20:19.762505 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:20:19.762511 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:20:19.762657 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:20:19.762730 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:20:19.762792 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:20:19.762861 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:20:19.762943 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:20:19.762973 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:20:19.762981 kernel: PCI host bridge to bus 0000:00
Jul 12 00:20:19.763059 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:20:19.763120 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:20:19.763179 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:20:19.763240 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:20:19.763320 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:20:19.763391 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:20:19.763454 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:20:19.763528 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:20:19.763594 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:20:19.763656 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:20:19.763778 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:20:19.763858 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:20:19.763916 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:20:19.763991 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:20:19.764051 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:20:19.764061 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:20:19.764068 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:20:19.764076 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:20:19.764086 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:20:19.764093 kernel: iommu: Default domain type: Translated
Jul 12 00:20:19.764101 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:20:19.764108 kernel: vgaarb: loaded
Jul 12 00:20:19.764115 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:20:19.764122 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:20:19.764130 kernel: PTP clock support registered
Jul 12 00:20:19.764137 kernel: Registered efivars operations
Jul 12 00:20:19.764145 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:20:19.764152 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:20:19.764162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:20:19.764169 kernel: pnp: PnP ACPI init
Jul 12 00:20:19.764241 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:20:19.764251 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:20:19.764258 kernel: NET: Registered PF_INET protocol family
Jul 12 00:20:19.764265 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:20:19.764273 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:20:19.764280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:20:19.764290 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:20:19.764297 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 12 00:20:19.764305 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:20:19.764312 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:20:19.764320 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:20:19.764328 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:20:19.764335 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:20:19.764343 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:20:19.764350 kernel: kvm [1]: HYP mode not available
Jul 12 00:20:19.764359 kernel: Initialise system trusted keyrings
Jul 12 00:20:19.764366 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:20:19.764373 kernel: Key type asymmetric registered
Jul 12 00:20:19.764380 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:20:19.764387 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:20:19.764395 kernel: io scheduler mq-deadline registered
Jul 12 00:20:19.764402 kernel: io scheduler kyber registered
Jul 12 00:20:19.764409 kernel: io scheduler bfq registered
Jul 12 00:20:19.764416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:20:19.764425 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:20:19.764433 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:20:19.764496 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:20:19.764506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:20:19.764513 kernel: thunder_xcv, ver 1.0
Jul 12 00:20:19.764521 kernel: thunder_bgx, ver 1.0
Jul 12 00:20:19.764528 kernel: nicpf, ver 1.0
Jul 12 00:20:19.764534 kernel: nicvf, ver 1.0
Jul 12 00:20:19.764603 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:20:19.764664 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:20:19 UTC (1752279619)
Jul 12 00:20:19.764673 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:20:19.764681 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:20:19.764688 kernel: Segment Routing with IPv6
Jul 12 00:20:19.764695 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:20:19.764702 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:20:19.764709 kernel: Key type dns_resolver registered
Jul 12 00:20:19.764716 kernel: registered taskstats version 1
Jul 12 00:20:19.764724 kernel: Loading compiled-in X.509 certificates
Jul 12 00:20:19.764732 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a'
Jul 12 00:20:19.764739 kernel: Key type .fscrypt registered
Jul 12 00:20:19.764746 kernel: Key type fscrypt-provisioning registered
Jul 12 00:20:19.764753 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:20:19.764760 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:20:19.764767 kernel: ima: No architecture policies found
Jul 12 00:20:19.764774 kernel: clk: Disabling unused clocks
Jul 12 00:20:19.764781 kernel: Freeing unused kernel memory: 36416K
Jul 12 00:20:19.764789 kernel: Run /init as init process
Jul 12 00:20:19.764824 kernel: with arguments:
Jul 12 00:20:19.764832 kernel: /init
Jul 12 00:20:19.764838 kernel: with environment:
Jul 12 00:20:19.764846 kernel: HOME=/
Jul 12 00:20:19.764853 kernel: TERM=linux
Jul 12 00:20:19.764859 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:20:19.764868 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:20:19.764879 systemd[1]: Detected virtualization kvm.
Jul 12 00:20:19.764887 systemd[1]: Detected architecture arm64.
Jul 12 00:20:19.764895 systemd[1]: Running in initrd.
Jul 12 00:20:19.764902 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:20:19.764910 systemd[1]: Hostname set to .
Jul 12 00:20:19.764918 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:20:19.764925 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:20:19.764933 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:20:19.764942 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:20:19.764958 systemd[1]: Reached target paths.target.
Jul 12 00:20:19.764976 systemd[1]: Reached target slices.target.
Jul 12 00:20:19.764984 systemd[1]: Reached target swap.target.
Jul 12 00:20:19.764991 systemd[1]: Reached target timers.target.
Jul 12 00:20:19.764999 systemd[1]: Listening on iscsid.socket.
Jul 12 00:20:19.765007 systemd[1]: Listening on iscsiuio.socket.
Jul 12 00:20:19.765017 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:20:19.765024 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:20:19.765032 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:20:19.765040 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:20:19.765048 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:20:19.765056 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:20:19.765063 systemd[1]: Reached target sockets.target.
Jul 12 00:20:19.765071 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:20:19.765078 systemd[1]: Finished network-cleanup.service.
Jul 12 00:20:19.765087 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:20:19.765094 systemd[1]: Starting systemd-journald.service...
Jul 12 00:20:19.765102 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:20:19.765109 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:20:19.765117 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 12 00:20:19.765124 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:20:19.765131 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:20:19.765139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:20:19.765146 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 12 00:20:19.765159 systemd-journald[290]: Journal started
Jul 12 00:20:19.765204 systemd-journald[290]: Runtime Journal (/run/log/journal/c27ff6bcafe7414398acd8f335d4e1c0) is 6.0M, max 48.7M, 42.6M free.
Jul 12 00:20:19.753431 systemd-modules-load[291]: Inserted module 'overlay'
Jul 12 00:20:19.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.769975 kernel: audit: type=1130 audit(1752279619.767:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.770008 systemd[1]: Started systemd-journald.service.
Jul 12 00:20:19.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.770989 kernel: audit: type=1130 audit(1752279619.769:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.770997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:20:19.776192 kernel: audit: type=1130 audit(1752279619.773:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.774297 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 12 00:20:19.782291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:20:19.780741 systemd-resolved[292]: Positive Trust Anchors:
Jul 12 00:20:19.780756 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:20:19.780784 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:20:19.787452 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 12 00:20:19.788889 systemd[1]: Started systemd-resolved.service.
Jul 12 00:20:19.792613 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 12 00:20:19.793925 kernel: Bridge firewalling registered
Jul 12 00:20:19.793486 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:20:19.797067 kernel: audit: type=1130 audit(1752279619.793:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.798764 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 12 00:20:19.800602 systemd[1]: Starting dracut-cmdline.service...
Jul 12 00:20:19.803985 kernel: audit: type=1130 audit(1752279619.799:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.806015 kernel: SCSI subsystem initialized
Jul 12 00:20:19.811003 dracut-cmdline[308]: dracut-dracut-053
Jul 12 00:20:19.814518 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:20:19.819942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:20:19.819994 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:20:19.820005 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 12 00:20:19.822792 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 12 00:20:19.824047 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:20:19.825882 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:20:19.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.829965 kernel: audit: type=1130 audit(1752279619.824:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.837084 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:20:19.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.840982 kernel: audit: type=1130 audit(1752279619.836:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.886016 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:20:19.897984 kernel: iscsi: registered transport (tcp)
Jul 12 00:20:19.914985 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:20:19.915048 kernel: QLogic iSCSI HBA Driver
Jul 12 00:20:19.956785 systemd[1]: Finished dracut-cmdline.service.
Jul 12 00:20:19.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:19.958460 systemd[1]: Starting dracut-pre-udev.service...
Jul 12 00:20:19.960907 kernel: audit: type=1130 audit(1752279619.957:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:20.005979 kernel: raid6: neonx8 gen() 13763 MB/s
Jul 12 00:20:20.022960 kernel: raid6: neonx8 xor() 10792 MB/s
Jul 12 00:20:20.040009 kernel: raid6: neonx4 gen() 13498 MB/s
Jul 12 00:20:20.056975 kernel: raid6: neonx4 xor() 8945 MB/s
Jul 12 00:20:20.073965 kernel: raid6: neonx2 gen() 12915 MB/s
Jul 12 00:20:20.090965 kernel: raid6: neonx2 xor() 10237 MB/s
Jul 12 00:20:20.107962 kernel: raid6: neonx1 gen() 10574 MB/s
Jul 12 00:20:20.124963 kernel: raid6: neonx1 xor() 8745 MB/s
Jul 12 00:20:20.141964 kernel: raid6: int64x8 gen() 6259 MB/s
Jul 12 00:20:20.158964 kernel: raid6: int64x8 xor() 3539 MB/s
Jul 12 00:20:20.175964 kernel: raid6: int64x4 gen() 7209 MB/s
Jul 12 00:20:20.192965 kernel: raid6: int64x4 xor() 3846 MB/s
Jul 12 00:20:20.209998 kernel: raid6: int64x2 gen() 6139 MB/s
Jul 12 00:20:20.226975 kernel: raid6: int64x2 xor() 3136 MB/s
Jul 12 00:20:20.243966 kernel: raid6: int64x1 gen() 5039 MB/s
Jul 12 00:20:20.261294 kernel: raid6: int64x1 xor() 2643 MB/s
Jul 12 00:20:20.261307 kernel: raid6: using algorithm neonx8 gen() 13763 MB/s
Jul 12 00:20:20.261315 kernel: raid6: .... xor() 10792 MB/s, rmw enabled
Jul 12 00:20:20.261324 kernel: raid6: using neon recovery algorithm
Jul 12 00:20:20.274987 kernel: xor: measuring software checksum speed
Jul 12 00:20:20.275016 kernel: 8regs : 17231 MB/sec
Jul 12 00:20:20.276040 kernel: 32regs : 20702 MB/sec
Jul 12 00:20:20.276052 kernel: arm64_neon : 27710 MB/sec
Jul 12 00:20:20.276066 kernel: xor: using function: arm64_neon (27710 MB/sec)
Jul 12 00:20:20.355972 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 12 00:20:20.366865 systemd[1]: Finished dracut-pre-udev.service.
Jul 12 00:20:20.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:20.368504 systemd[1]: Starting systemd-udevd.service...
Jul 12 00:20:20.370973 kernel: audit: type=1130 audit(1752279620.366:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:20.367000 audit: BPF prog-id=7 op=LOAD Jul 12 00:20:20.367000 audit: BPF prog-id=8 op=LOAD Jul 12 00:20:20.382023 systemd-udevd[491]: Using default interface naming scheme 'v252'. Jul 12 00:20:20.385533 systemd[1]: Started systemd-udevd.service. Jul 12 00:20:20.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:20.387137 systemd[1]: Starting dracut-pre-trigger.service... Jul 12 00:20:20.400883 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation Jul 12 00:20:20.433129 systemd[1]: Finished dracut-pre-trigger.service. Jul 12 00:20:20.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:20.434768 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:20:20.470676 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:20:20.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:20.508899 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 00:20:20.518228 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:20:20.518244 kernel: GPT:9289727 != 19775487 Jul 12 00:20:20.518253 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 12 00:20:20.518263 kernel: GPT:9289727 != 19775487 Jul 12 00:20:20.518271 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:20:20.518280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:20:20.533971 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (544) Jul 12 00:20:20.536389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:20:20.537485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:20:20.542180 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:20:20.549399 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:20:20.552859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:20:20.554899 systemd[1]: Starting disk-uuid.service... Jul 12 00:20:20.562140 disk-uuid[563]: Primary Header is updated. Jul 12 00:20:20.562140 disk-uuid[563]: Secondary Entries is updated. Jul 12 00:20:20.562140 disk-uuid[563]: Secondary Header is updated. Jul 12 00:20:20.565972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:20:21.584969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:20:21.585022 disk-uuid[564]: The operation has completed successfully. Jul 12 00:20:21.608045 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:20:21.608136 systemd[1]: Finished disk-uuid.service. Jul 12 00:20:21.609627 systemd[1]: Starting verity-setup.service... Jul 12 00:20:21.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:21.625968 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:20:21.647128 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:20:21.649043 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:20:21.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.651103 systemd[1]: Finished verity-setup.service. Jul 12 00:20:21.701764 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:20:21.702959 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:20:21.702452 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:20:21.703165 systemd[1]: Starting ignition-setup.service... Jul 12 00:20:21.705257 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:20:21.712151 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:20:21.712212 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:20:21.712228 kernel: BTRFS info (device vda6): has skinny extents Jul 12 00:20:21.720457 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:20:21.730807 systemd[1]: Finished ignition-setup.service. Jul 12 00:20:21.732262 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:20:21.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.788868 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:20:21.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:21.790000 audit: BPF prog-id=9 op=LOAD Jul 12 00:20:21.791154 systemd[1]: Starting systemd-networkd.service... Jul 12 00:20:21.806437 ignition[653]: Ignition 2.14.0 Jul 12 00:20:21.806446 ignition[653]: Stage: fetch-offline Jul 12 00:20:21.806481 ignition[653]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:20:21.806489 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:20:21.806612 ignition[653]: parsed url from cmdline: "" Jul 12 00:20:21.806615 ignition[653]: no config URL provided Jul 12 00:20:21.806619 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:20:21.806626 ignition[653]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:20:21.806644 ignition[653]: op(1): [started] loading QEMU firmware config module Jul 12 00:20:21.806648 ignition[653]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 00:20:21.813262 ignition[653]: op(1): [finished] loading QEMU firmware config module Jul 12 00:20:21.814343 systemd-networkd[740]: lo: Link UP Jul 12 00:20:21.814357 systemd-networkd[740]: lo: Gained carrier Jul 12 00:20:21.814970 systemd-networkd[740]: Enumeration completed Jul 12 00:20:21.815073 systemd[1]: Started systemd-networkd.service. Jul 12 00:20:21.815322 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:20:21.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.815866 systemd[1]: Reached target network.target. Jul 12 00:20:21.816713 systemd-networkd[740]: eth0: Link UP Jul 12 00:20:21.816717 systemd-networkd[740]: eth0: Gained carrier Jul 12 00:20:21.817788 systemd[1]: Starting iscsiuio.service... Jul 12 00:20:21.827040 systemd[1]: Started iscsiuio.service. 
Jul 12 00:20:21.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.828591 systemd[1]: Starting iscsid.service... Jul 12 00:20:21.829193 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:20:21.831955 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:20:21.831955 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 12 00:20:21.831955 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:20:21.831955 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:20:21.831955 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:20:21.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.841520 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:20:21.834653 systemd[1]: Started iscsid.service. Jul 12 00:20:21.838900 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:20:21.848842 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:20:21.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 12 00:20:21.849856 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:20:21.851240 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:20:21.852711 systemd[1]: Reached target remote-fs.target. Jul 12 00:20:21.854828 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:20:21.862346 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:20:21.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.874205 ignition[653]: parsing config with SHA512: d8a23fe31daa8a1f232ceadee64ffc49ec738a2e1e2eb7aef449223d9b69f9fb8573c7952d4b84d26de5af1dfb4d7227c2a1b6fb97df10ee27c758bb6d171dbc Jul 12 00:20:21.887572 unknown[653]: fetched base config from "system" Jul 12 00:20:21.887584 unknown[653]: fetched user config from "qemu" Jul 12 00:20:21.888078 ignition[653]: fetch-offline: fetch-offline passed Jul 12 00:20:21.888903 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:20:21.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.888128 ignition[653]: Ignition finished successfully Jul 12 00:20:21.890236 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 00:20:21.890986 systemd[1]: Starting ignition-kargs.service... Jul 12 00:20:21.900922 ignition[761]: Ignition 2.14.0 Jul 12 00:20:21.900931 ignition[761]: Stage: kargs Jul 12 00:20:21.901055 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:20:21.901064 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:20:21.904194 systemd[1]: Finished ignition-kargs.service. 
Jul 12 00:20:21.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.902067 ignition[761]: kargs: kargs passed Jul 12 00:20:21.902112 ignition[761]: Ignition finished successfully Jul 12 00:20:21.905839 systemd[1]: Starting ignition-disks.service... Jul 12 00:20:21.912473 ignition[767]: Ignition 2.14.0 Jul 12 00:20:21.912481 ignition[767]: Stage: disks Jul 12 00:20:21.912566 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:20:21.912575 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:20:21.913644 ignition[767]: disks: disks passed Jul 12 00:20:21.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.914438 systemd[1]: Finished ignition-disks.service. Jul 12 00:20:21.913690 ignition[767]: Ignition finished successfully Jul 12 00:20:21.915487 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:20:21.916367 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:20:21.917343 systemd[1]: Reached target local-fs.target. Jul 12 00:20:21.918330 systemd[1]: Reached target sysinit.target. Jul 12 00:20:21.919325 systemd[1]: Reached target basic.target. Jul 12 00:20:21.921061 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:20:21.931658 systemd-fsck[775]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:20:21.935696 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:20:21.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:21.937189 systemd[1]: Mounting sysroot.mount... 
Jul 12 00:20:21.943964 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:20:21.944335 systemd[1]: Mounted sysroot.mount. Jul 12 00:20:21.944901 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:20:21.946798 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:20:21.947531 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 12 00:20:21.947566 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:20:21.947588 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:20:21.949577 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:20:21.951577 systemd[1]: Starting initrd-setup-root.service... Jul 12 00:20:21.955938 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:20:21.960315 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:20:21.964012 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:20:21.967993 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:20:22.001503 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:20:22.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:22.002873 systemd[1]: Starting ignition-mount.service... Jul 12 00:20:22.004050 systemd[1]: Starting sysroot-boot.service... Jul 12 00:20:22.007987 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 12 00:20:22.015417 ignition[827]: INFO : Ignition 2.14.0 Jul 12 00:20:22.015417 ignition[827]: INFO : Stage: mount Jul 12 00:20:22.016613 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:20:22.016613 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:20:22.016613 ignition[827]: INFO : mount: mount passed Jul 12 00:20:22.016613 ignition[827]: INFO : Ignition finished successfully Jul 12 00:20:22.018506 systemd[1]: Finished ignition-mount.service. Jul 12 00:20:22.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:22.029612 systemd[1]: Finished sysroot-boot.service. Jul 12 00:20:22.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:22.659746 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:20:22.666483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (837) Jul 12 00:20:22.666522 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:20:22.666532 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:20:22.667401 kernel: BTRFS info (device vda6): has skinny extents Jul 12 00:20:22.670204 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:20:22.671626 systemd[1]: Starting ignition-files.service... 
Jul 12 00:20:22.685284 ignition[857]: INFO : Ignition 2.14.0 Jul 12 00:20:22.685284 ignition[857]: INFO : Stage: files Jul 12 00:20:22.686521 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:20:22.686521 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:20:22.686521 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:20:22.696923 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:20:22.696923 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:20:22.703678 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:20:22.704827 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:20:22.704827 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:20:22.704430 unknown[857]: wrote ssh authorized keys file for user: core Jul 12 00:20:22.707846 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:20:22.707846 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:20:22.707846 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:20:22.707846 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:20:22.744816 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 00:20:22.811003 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 
00:20:22.812412 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 12 00:20:22.812412 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 12 00:20:22.938241 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 12 00:20:23.069881 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 
00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:20:23.071317 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:20:23.362989 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 12 00:20:23.420081 systemd-networkd[740]: eth0: Gained IPv6LL Jul 12 00:20:23.816738 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:20:23.816738 ignition[857]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 00:20:23.820918 ignition[857]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:20:23.856983 ignition[857]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:20:23.859073 ignition[857]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 00:20:23.859073 ignition[857]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jul 12 00:20:23.859073 ignition[857]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:20:23.859073 ignition[857]: INFO : files: files passed Jul 12 00:20:23.859073 ignition[857]: INFO : Ignition finished successfully Jul 12 00:20:23.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.859407 systemd[1]: Finished ignition-files.service. Jul 12 00:20:23.861735 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:20:23.867049 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 12 00:20:23.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.862561 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 12 00:20:23.869858 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:20:23.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.863170 systemd[1]: Starting ignition-quench.service... Jul 12 00:20:23.866926 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:20:23.867044 systemd[1]: Finished ignition-quench.service. 
Jul 12 00:20:23.869507 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 12 00:20:23.870582 systemd[1]: Reached target ignition-complete.target. Jul 12 00:20:23.872549 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:20:23.884595 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:20:23.884680 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:20:23.885940 systemd[1]: Reached target initrd-fs.target. Jul 12 00:20:23.886813 systemd[1]: Reached target initrd.target. Jul 12 00:20:23.887787 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:20:23.888484 systemd[1]: Starting dracut-pre-pivot.service... Jul 12 00:20:23.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.898395 systemd[1]: Finished dracut-pre-pivot.service. Jul 12 00:20:23.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.899811 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:20:23.907315 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:20:23.907995 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:20:23.909084 systemd[1]: Stopped target timers.target. Jul 12 00:20:23.910027 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:20:23.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:23.910128 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:20:23.911048 systemd[1]: Stopped target initrd.target. Jul 12 00:20:23.912227 systemd[1]: Stopped target basic.target. Jul 12 00:20:23.913139 systemd[1]: Stopped target ignition-complete.target. Jul 12 00:20:23.914099 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:20:23.915039 systemd[1]: Stopped target initrd-root-device.target. Jul 12 00:20:23.916105 systemd[1]: Stopped target remote-fs.target. Jul 12 00:20:23.917072 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:20:23.918098 systemd[1]: Stopped target sysinit.target. Jul 12 00:20:23.919007 systemd[1]: Stopped target local-fs.target. Jul 12 00:20:23.919960 systemd[1]: Stopped target local-fs-pre.target. Jul 12 00:20:23.920918 systemd[1]: Stopped target swap.target. Jul 12 00:20:23.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.921820 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:20:23.921920 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:20:23.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.922892 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:20:23.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.923715 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:20:23.923813 systemd[1]: Stopped dracut-initqueue.service. Jul 12 00:20:23.924922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jul 12 00:20:23.925027 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:20:23.925932 systemd[1]: Stopped target paths.target. Jul 12 00:20:23.926767 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:20:23.930995 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:20:23.931706 systemd[1]: Stopped target slices.target. Jul 12 00:20:23.932722 systemd[1]: Stopped target sockets.target. Jul 12 00:20:23.933630 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:20:23.933695 systemd[1]: Closed iscsid.socket. Jul 12 00:20:23.934494 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:20:23.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.934553 systemd[1]: Closed iscsiuio.socket. Jul 12 00:20:23.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.935469 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:20:23.935558 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 12 00:20:23.936479 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:20:23.936560 systemd[1]: Stopped ignition-files.service. Jul 12 00:20:23.938249 systemd[1]: Stopping ignition-mount.service... Jul 12 00:20:23.939900 systemd[1]: Stopping sysroot-boot.service... Jul 12 00:20:23.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:23.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.940808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:20:23.940912 systemd[1]: Stopped systemd-udev-trigger.service. Jul 12 00:20:23.942102 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:20:23.942237 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:20:23.946513 ignition[897]: INFO : Ignition 2.14.0 Jul 12 00:20:23.946513 ignition[897]: INFO : Stage: umount Jul 12 00:20:23.946513 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:20:23.946513 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:20:23.946513 ignition[897]: INFO : umount: umount passed Jul 12 00:20:23.946513 ignition[897]: INFO : Ignition finished successfully Jul 12 00:20:23.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.948991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:20:23.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.949353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:20:23.949440 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:20:23.950724 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jul 12 00:20:23.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.950812 systemd[1]: Stopped ignition-mount.service. Jul 12 00:20:23.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.951702 systemd[1]: Stopped target network.target. Jul 12 00:20:23.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.953581 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:20:23.953630 systemd[1]: Stopped ignition-disks.service. Jul 12 00:20:23.954538 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:20:23.954570 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:20:23.955598 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:20:23.955631 systemd[1]: Stopped ignition-setup.service. Jul 12 00:20:23.956649 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:20:23.957724 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:20:23.966399 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:20:23.966503 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:20:23.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:23.968005 systemd-networkd[740]: eth0: DHCPv6 lease lost Jul 12 00:20:23.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.968980 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:20:23.969076 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:20:23.970149 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:20:23.972000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:20:23.970180 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:20:23.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.971901 systemd[1]: Stopping network-cleanup.service... Jul 12 00:20:23.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.973569 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:20:23.976000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:20:23.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.973627 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:20:23.974939 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:20:23.974987 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:20:23.976586 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:20:23.976629 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:20:23.978182 systemd[1]: Stopping systemd-udevd.service... 
Jul 12 00:20:23.981918 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 00:20:23.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.984709 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:20:23.984812 systemd[1]: Stopped network-cleanup.service. Jul 12 00:20:23.987869 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:20:23.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.988014 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:20:23.989268 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:20:23.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.989304 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:20:23.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.990205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:20:23.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.990235 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:20:23.991295 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:20:23.991337 systemd[1]: Stopped dracut-pre-udev.service. 
Jul 12 00:20:23.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.992319 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:20:23.992353 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:20:23.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.993457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:20:23.993491 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:20:24.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.995438 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:20:24.008328 kernel: kauditd_printk_skb: 60 callbacks suppressed Jul 12 00:20:24.008350 kernel: audit: type=1130 audit(1752279624.002:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:24.008361 kernel: audit: type=1131 audit(1752279624.002:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:24.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:24.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.997076 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:20:23.997135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 12 00:20:24.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.998771 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:20:24.013479 kernel: audit: type=1131 audit(1752279624.010:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:23.998817 systemd[1]: Stopped kmod-static-nodes.service. Jul 12 00:20:23.999501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:20:23.999535 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:20:24.001334 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 12 00:20:24.001759 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:20:24.001858 systemd[1]: Stopped sysroot-boot.service. Jul 12 00:20:24.002629 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:20:24.002705 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 12 00:20:24.003690 systemd[1]: Reached target initrd-switch-root.target. 
Jul 12 00:20:24.008911 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:20:24.009059 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:20:24.010892 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:20:24.016930 systemd[1]: Switching root. Jul 12 00:20:24.021000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:20:24.021000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:20:24.023290 kernel: audit: type=1334 audit(1752279624.021:74): prog-id=5 op=UNLOAD Jul 12 00:20:24.023312 kernel: audit: type=1334 audit(1752279624.021:75): prog-id=4 op=UNLOAD Jul 12 00:20:24.023322 kernel: audit: type=1334 audit(1752279624.021:76): prog-id=3 op=UNLOAD Jul 12 00:20:24.021000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:20:24.023000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:20:24.023000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:20:24.025968 kernel: audit: type=1334 audit(1752279624.023:77): prog-id=8 op=UNLOAD Jul 12 00:20:24.025990 kernel: audit: type=1334 audit(1752279624.023:78): prog-id=7 op=UNLOAD Jul 12 00:20:24.047353 iscsid[747]: iscsid shutting down. Jul 12 00:20:24.047896 systemd-journald[290]: Journal stopped Jul 12 00:20:26.072571 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 12 00:20:26.072626 kernel: audit: type=1335 audit(1752279624.047:79): pid=290 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jul 12 00:20:26.072644 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:20:26.072654 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 12 00:20:26.072666 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:20:26.072679 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:20:26.072689 kernel: SELinux: policy capability open_perms=1 Jul 12 00:20:26.072699 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:20:26.072708 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:20:26.072719 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:20:26.072729 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:20:26.072742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:20:26.072751 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:20:26.072761 kernel: audit: type=1403 audit(1752279624.140:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:20:26.072780 systemd[1]: Successfully loaded SELinux policy in 33.669ms. Jul 12 00:20:26.072797 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.916ms. Jul 12 00:20:26.072809 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:20:26.072821 systemd[1]: Detected virtualization kvm. Jul 12 00:20:26.072833 systemd[1]: Detected architecture arm64. Jul 12 00:20:26.072845 systemd[1]: Detected first boot. Jul 12 00:20:26.072856 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:20:26.072866 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 12 00:20:26.072876 systemd[1]: Populated /etc with preset unit settings. 
Jul 12 00:20:26.072887 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:20:26.072898 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:20:26.072911 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:20:26.072923 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:20:26.072934 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 12 00:20:26.072945 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 12 00:20:26.072966 systemd[1]: Created slice system-addon\x2drun.slice. Jul 12 00:20:26.072977 systemd[1]: Created slice system-getty.slice. Jul 12 00:20:26.072987 systemd[1]: Created slice system-modprobe.slice. Jul 12 00:20:26.072997 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 12 00:20:26.073009 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 12 00:20:26.073021 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 12 00:20:26.073031 systemd[1]: Created slice user.slice. Jul 12 00:20:26.073042 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:20:26.073053 systemd[1]: Started systemd-ask-password-wall.path. Jul 12 00:20:26.073063 systemd[1]: Set up automount boot.automount. Jul 12 00:20:26.073073 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 12 00:20:26.073084 systemd[1]: Reached target integritysetup.target. Jul 12 00:20:26.073096 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:20:26.073106 systemd[1]: Reached target remote-fs.target. Jul 12 00:20:26.073116 systemd[1]: Reached target slices.target. 
Jul 12 00:20:26.073127 systemd[1]: Reached target swap.target. Jul 12 00:20:26.073137 systemd[1]: Reached target torcx.target. Jul 12 00:20:26.073148 systemd[1]: Reached target veritysetup.target. Jul 12 00:20:26.073159 systemd[1]: Listening on systemd-coredump.socket. Jul 12 00:20:26.073169 systemd[1]: Listening on systemd-initctl.socket. Jul 12 00:20:26.073179 systemd[1]: Listening on systemd-journald-audit.socket. Jul 12 00:20:26.073189 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 12 00:20:26.073200 systemd[1]: Listening on systemd-journald.socket. Jul 12 00:20:26.073211 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:20:26.073221 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:20:26.073232 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:20:26.073242 systemd[1]: Listening on systemd-userdbd.socket. Jul 12 00:20:26.073252 systemd[1]: Mounting dev-hugepages.mount... Jul 12 00:20:26.073263 systemd[1]: Mounting dev-mqueue.mount... Jul 12 00:20:26.073273 systemd[1]: Mounting media.mount... Jul 12 00:20:26.073283 systemd[1]: Mounting sys-kernel-debug.mount... Jul 12 00:20:26.073295 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 12 00:20:26.073305 systemd[1]: Mounting tmp.mount... Jul 12 00:20:26.073316 systemd[1]: Starting flatcar-tmpfiles.service... Jul 12 00:20:26.073326 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:20:26.073336 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:20:26.073346 systemd[1]: Starting modprobe@configfs.service... Jul 12 00:20:26.073357 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:20:26.073368 systemd[1]: Starting modprobe@drm.service... Jul 12 00:20:26.073378 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:20:26.073389 systemd[1]: Starting modprobe@fuse.service... Jul 12 00:20:26.073399 systemd[1]: Starting modprobe@loop.service... 
Jul 12 00:20:26.073410 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:20:26.073421 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 12 00:20:26.073432 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 12 00:20:26.073442 systemd[1]: Starting systemd-journald.service... Jul 12 00:20:26.073453 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:20:26.073463 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:20:26.073473 kernel: fuse: init (API version 7.34) Jul 12 00:20:26.073485 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:20:26.073497 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:20:26.073508 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:20:26.073519 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:20:26.073529 kernel: loop: module loaded Jul 12 00:20:26.073539 systemd[1]: Mounted media.mount. Jul 12 00:20:26.073549 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:20:26.073559 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:20:26.073570 systemd[1]: Mounted tmp.mount. Jul 12 00:20:26.073580 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:20:26.073591 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:20:26.073602 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:20:26.073613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:20:26.073623 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:20:26.073634 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:20:26.073644 systemd[1]: Finished modprobe@drm.service. Jul 12 00:20:26.073654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:20:26.073664 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 12 00:20:26.073674 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:20:26.073684 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:20:26.073694 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:20:26.073706 systemd[1]: Finished modprobe@loop.service. Jul 12 00:20:26.073716 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:20:26.073727 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:20:26.073737 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:20:26.073747 systemd[1]: Reached target network-pre.target. Jul 12 00:20:26.073759 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:20:26.073775 systemd-journald[1028]: Journal started Jul 12 00:20:26.073821 systemd-journald[1028]: Runtime Journal (/run/log/journal/c27ff6bcafe7414398acd8f335d4e1c0) is 6.0M, max 48.7M, 42.6M free. Jul 12 00:20:25.980000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:20:25.980000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 12 00:20:26.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 12 00:20:26.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:26.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.067000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:20:26.067000 audit[1028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff1700d10 a2=4000 a3=1 items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:20:26.067000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:20:26.076533 systemd[1]: Mounting sys-kernel-config.mount... 
Jul 12 00:20:26.076574 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:20:26.083676 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:20:26.083718 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:20:26.083733 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:20:26.083746 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:20:26.094013 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:20:26.094046 systemd[1]: Started systemd-journald.service. Jul 12 00:20:26.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.090658 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:20:26.091485 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:20:26.092438 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:20:26.093582 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:20:26.095383 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:20:26.099261 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:20:26.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:20:26.103413 systemd-journald[1028]: Time spent on flushing to /var/log/journal/c27ff6bcafe7414398acd8f335d4e1c0 is 20.265ms for 936 entries. Jul 12 00:20:26.103413 systemd-journald[1028]: System Journal (/var/log/journal/c27ff6bcafe7414398acd8f335d4e1c0) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:20:26.130242 systemd-journald[1028]: Received client request to flush runtime journal. Jul 12 00:20:26.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.107920 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:20:26.109665 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:20:26.131101 udevadm[1084]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 12 00:20:26.114758 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:20:26.116518 systemd[1]: Starting systemd-udev-settle.service... Jul 12 00:20:26.131227 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:20:26.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.136248 systemd[1]: Finished systemd-sysusers.service. 
Jul 12 00:20:26.137987 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:20:26.155467 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:20:26.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.455650 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:20:26.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.457537 systemd[1]: Starting systemd-udevd.service... Jul 12 00:20:26.476499 systemd-udevd[1092]: Using default interface naming scheme 'v252'. Jul 12 00:20:26.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.489712 systemd[1]: Started systemd-udevd.service. Jul 12 00:20:26.492041 systemd[1]: Starting systemd-networkd.service... Jul 12 00:20:26.504380 systemd[1]: Starting systemd-userdbd.service... Jul 12 00:20:26.509061 systemd[1]: Found device dev-ttyAMA0.device. Jul 12 00:20:26.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:20:26.540593 systemd[1]: Started systemd-userdbd.service. Jul 12 00:20:26.570214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 12 00:20:26.603296 systemd-networkd[1101]: lo: Link UP
Jul 12 00:20:26.603304 systemd-networkd[1101]: lo: Gained carrier
Jul 12 00:20:26.603657 systemd-networkd[1101]: Enumeration completed
Jul 12 00:20:26.603757 systemd-networkd[1101]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:20:26.603760 systemd[1]: Started systemd-networkd.service.
Jul 12 00:20:26.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.606626 systemd-networkd[1101]: eth0: Link UP
Jul 12 00:20:26.606636 systemd-networkd[1101]: eth0: Gained carrier
Jul 12 00:20:26.614389 systemd[1]: Finished systemd-udev-settle.service.
Jul 12 00:20:26.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.616308 systemd[1]: Starting lvm2-activation-early.service...
Jul 12 00:20:26.627092 systemd-networkd[1101]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:20:26.638013 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:20:26.665909 systemd[1]: Finished lvm2-activation-early.service.
Jul 12 00:20:26.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.666875 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:20:26.668674 systemd[1]: Starting lvm2-activation.service...
Jul 12 00:20:26.672184 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:20:26.696942 systemd[1]: Finished lvm2-activation.service.
Jul 12 00:20:26.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.697801 systemd[1]: Reached target local-fs-pre.target.
Jul 12 00:20:26.698478 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:20:26.698503 systemd[1]: Reached target local-fs.target.
Jul 12 00:20:26.699065 systemd[1]: Reached target machines.target.
Jul 12 00:20:26.701915 systemd[1]: Starting ldconfig.service...
Jul 12 00:20:26.703860 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:20:26.703915 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:26.704973 systemd[1]: Starting systemd-boot-update.service...
Jul 12 00:20:26.707307 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 12 00:20:26.709422 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 12 00:20:26.711839 systemd[1]: Starting systemd-sysext.service...
Jul 12 00:20:26.713258 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl)
Jul 12 00:20:26.714677 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 12 00:20:26.718856 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 12 00:20:26.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.724426 systemd[1]: Unmounting usr-share-oem.mount...
Jul 12 00:20:26.729880 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 12 00:20:26.730159 systemd[1]: Unmounted usr-share-oem.mount.
Jul 12 00:20:26.775024 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 12 00:20:26.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.779977 kernel: loop0: detected capacity change from 0 to 203944
Jul 12 00:20:26.788982 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:20:26.799692 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31)
Jul 12 00:20:26.799692 systemd-fsck[1143]: /dev/vda1: 236 files, 117310/258078 clusters
Jul 12 00:20:26.801492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 12 00:20:26.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.804967 kernel: loop1: detected capacity change from 0 to 203944
Jul 12 00:20:26.810340 (sd-sysext)[1148]: Using extensions 'kubernetes'.
Jul 12 00:20:26.811809 (sd-sysext)[1148]: Merged extensions into '/usr'.
Jul 12 00:20:26.831424 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:20:26.832836 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:20:26.834866 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:20:26.836829 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:20:26.837773 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:20:26.837913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:26.838670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:20:26.839107 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:20:26.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.840151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:20:26.840301 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:20:26.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.841431 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:20:26.841593 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:20:26.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.842811 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:20:26.842907 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:20:26.880135 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:20:26.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:26.883168 systemd[1]: Finished ldconfig.service.
Jul 12 00:20:27.038208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:20:27.040060 systemd[1]: Mounting boot.mount...
Jul 12 00:20:27.041795 systemd[1]: Mounting usr-share-oem.mount...
Jul 12 00:20:27.048106 systemd[1]: Mounted boot.mount.
Jul 12 00:20:27.049012 systemd[1]: Mounted usr-share-oem.mount.
Jul 12 00:20:27.050977 systemd[1]: Finished systemd-sysext.service.
Jul 12 00:20:27.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.053107 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:20:27.055041 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 12 00:20:27.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.058194 systemd[1]: Finished systemd-boot-update.service.
Jul 12 00:20:27.060612 systemd[1]: Reloading.
Jul 12 00:20:27.065328 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 12 00:20:27.066067 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:20:27.067521 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:20:27.098099 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-07-12T00:20:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:20:27.098129 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-07-12T00:20:27Z" level=info msg="torcx already run"
Jul 12 00:20:27.163788 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:20:27.163807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:20:27.181131 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:20:27.228662 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 12 00:20:27.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.231853 systemd[1]: Starting audit-rules.service...
Jul 12 00:20:27.233844 systemd[1]: Starting clean-ca-certificates.service...
Jul 12 00:20:27.235880 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 12 00:20:27.238329 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:20:27.240616 systemd[1]: Starting systemd-timesyncd.service...
Jul 12 00:20:27.242512 systemd[1]: Starting systemd-update-utmp.service...
Jul 12 00:20:27.243990 systemd[1]: Finished clean-ca-certificates.service.
Jul 12 00:20:27.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.248721 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.248000 audit[1245]: SYSTEM_BOOT pid=1245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.250191 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:20:27.252265 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:20:27.254083 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:20:27.254724 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.254857 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:27.254981 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:20:27.255693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:20:27.255837 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:20:27.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.257058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:20:27.257225 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:20:27.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.258322 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:20:27.258463 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:20:27.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.260870 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:20:27.261092 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.262499 systemd[1]: Finished systemd-update-utmp.service.
Jul 12 00:20:27.264044 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.265143 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:20:27.266831 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:20:27.268728 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:20:27.269450 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.269581 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:27.269702 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:20:27.270663 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 12 00:20:27.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.271809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:20:27.271940 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:20:27.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.272862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:20:27.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.273053 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:20:27.274096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:20:27.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.275205 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:20:27.276112 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:20:27.276201 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.280043 systemd[1]: Starting systemd-update-done.service...
Jul 12 00:20:27.283039 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.284322 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:20:27.286123 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:20:27.287744 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:20:27.289471 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:20:27.290158 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.290275 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:27.291456 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 12 00:20:27.292204 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:20:27.293260 systemd[1]: Finished systemd-update-done.service.
Jul 12 00:20:27.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.294245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:20:27.294373 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:20:27.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.295351 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:20:27.295477 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:20:27.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.299523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:20:27.299658 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:20:27.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.300750 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:20:27.300918 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:20:27.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.302161 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:20:27.302245 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.303305 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:20:27.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:20:27.307000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 12 00:20:27.307000 audit[1282]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffb510e40 a2=420 a3=0 items=0 ppid=1233 pid=1282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:20:27.307000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 12 00:20:27.308129 augenrules[1282]: No rules
Jul 12 00:20:27.308911 systemd[1]: Finished audit-rules.service.
Jul 12 00:20:27.333817 systemd-resolved[1238]: Positive Trust Anchors:
Jul 12 00:20:27.334159 systemd-resolved[1238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:20:27.334244 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:20:27.335017 systemd[1]: Started systemd-timesyncd.service.
Jul 12 00:20:27.335831 systemd-timesyncd[1239]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 00:20:27.335885 systemd-timesyncd[1239]: Initial clock synchronization to Sat 2025-07-12 00:20:27.315451 UTC.
Jul 12 00:20:27.336235 systemd[1]: Reached target time-set.target.
Jul 12 00:20:27.348711 systemd-resolved[1238]: Defaulting to hostname 'linux'.
Jul 12 00:20:27.350383 systemd[1]: Started systemd-resolved.service.
Jul 12 00:20:27.351064 systemd[1]: Reached target network.target.
Jul 12 00:20:27.351611 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:20:27.352195 systemd[1]: Reached target sysinit.target.
Jul 12 00:20:27.352811 systemd[1]: Started motdgen.path.
Jul 12 00:20:27.353350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 12 00:20:27.354271 systemd[1]: Started logrotate.timer.
Jul 12 00:20:27.354865 systemd[1]: Started mdadm.timer.
Jul 12 00:20:27.355395 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 12 00:20:27.355997 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:20:27.356022 systemd[1]: Reached target paths.target.
Jul 12 00:20:27.356535 systemd[1]: Reached target timers.target.
Jul 12 00:20:27.357388 systemd[1]: Listening on dbus.socket.
Jul 12 00:20:27.359121 systemd[1]: Starting docker.socket...
Jul 12 00:20:27.360723 systemd[1]: Listening on sshd.socket.
Jul 12 00:20:27.361400 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:27.361733 systemd[1]: Listening on docker.socket.
Jul 12 00:20:27.362358 systemd[1]: Reached target sockets.target.
Jul 12 00:20:27.362910 systemd[1]: Reached target basic.target.
Jul 12 00:20:27.363627 systemd[1]: System is tainted: cgroupsv1
Jul 12 00:20:27.363671 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.363692 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:20:27.364774 systemd[1]: Starting containerd.service...
Jul 12 00:20:27.366404 systemd[1]: Starting dbus.service...
Jul 12 00:20:27.368158 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 12 00:20:27.370165 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:20:27.371049 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 12 00:20:27.372473 systemd[1]: Starting motdgen.service...
Jul 12 00:20:27.374377 systemd[1]: Starting prepare-helm.service...
Jul 12 00:20:27.376385 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 12 00:20:27.378270 systemd[1]: Starting sshd-keygen.service...
Jul 12 00:20:27.380568 systemd[1]: Starting systemd-logind.service...
Jul 12 00:20:27.381338 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:20:27.381415 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:20:27.385150 systemd[1]: Starting update-engine.service...
Jul 12 00:20:27.387456 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 12 00:20:27.391262 jq[1311]: true
Jul 12 00:20:27.400930 jq[1296]: false
Jul 12 00:20:27.401361 extend-filesystems[1297]: Found loop1
Jul 12 00:20:27.408168 jq[1317]: true
Jul 12 00:20:27.402057 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda1
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda2
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda3
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found usr
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda4
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda6
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda7
Jul 12 00:20:27.408357 extend-filesystems[1297]: Found vda9
Jul 12 00:20:27.408357 extend-filesystems[1297]: Checking size of /dev/vda9
Jul 12 00:20:27.402291 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 12 00:20:27.430088 dbus-daemon[1295]: [system] SELinux support is enabled
Jul 12 00:20:27.438703 tar[1313]: linux-arm64/helm
Jul 12 00:20:27.438897 extend-filesystems[1297]: Resized partition /dev/vda9
Jul 12 00:20:27.419592 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:20:27.419987 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 12 00:20:27.421696 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:20:27.421927 systemd[1]: Finished motdgen.service.
Jul 12 00:20:27.430263 systemd[1]: Started dbus.service.
Jul 12 00:20:27.443192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:20:27.443214 systemd[1]: Reached target system-config.target.
Jul 12 00:20:27.444608 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:20:27.444631 systemd[1]: Reached target user-config.target.
Jul 12 00:20:27.451681 extend-filesystems[1344]: resize2fs 1.46.5 (30-Dec-2021)
Jul 12 00:20:27.475362 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 00:20:27.488011 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:20:27.488442 systemd-logind[1305]: New seat seat0.
Jul 12 00:20:27.493439 systemd[1]: Started systemd-logind.service.
Jul 12 00:20:27.531151 env[1315]: time="2025-07-12T00:20:27.531093640Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 12 00:20:27.548224 env[1315]: time="2025-07-12T00:20:27.548130320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:20:27.548450 env[1315]: time="2025-07-12T00:20:27.548429440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:27.549697 env[1315]: time="2025-07-12T00:20:27.549658520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:27.549697 env[1315]: time="2025-07-12T00:20:27.549692560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:27.549979 env[1315]: time="2025-07-12T00:20:27.549941720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:27.550029 env[1315]: time="2025-07-12T00:20:27.549980120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:27.550029 env[1315]: time="2025-07-12T00:20:27.549993480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 12 00:20:27.550029 env[1315]: time="2025-07-12T00:20:27.550003240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:27.550091 env[1315]: time="2025-07-12T00:20:27.550080080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:27.550381 env[1315]: time="2025-07-12T00:20:27.550353840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:27.550521 env[1315]: time="2025-07-12T00:20:27.550495120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:27.550521 env[1315]: time="2025-07-12T00:20:27.550513520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:20:27.550580 env[1315]: time="2025-07-12T00:20:27.550565440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 12 00:20:27.550612 env[1315]: time="2025-07-12T00:20:27.550581280Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:20:27.573507 update_engine[1310]: I0712 00:20:27.573315 1310 main.cc:92] Flatcar Update Engine starting
Jul 12 00:20:27.575650 systemd[1]: Started update-engine.service.
Jul 12 00:20:27.575776 update_engine[1310]: I0712 00:20:27.575646 1310 update_check_scheduler.cc:74] Next update check in 7m57s
Jul 12 00:20:27.578288 systemd[1]: Started locksmithd.service.
Jul 12 00:20:27.587993 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 00:20:27.603160 extend-filesystems[1344]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 00:20:27.603160 extend-filesystems[1344]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:20:27.603160 extend-filesystems[1344]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 00:20:27.607384 extend-filesystems[1297]: Resized filesystem in /dev/vda9
Jul 12 00:20:27.609076 bash[1353]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.603541120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.603581080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.603623720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.605200200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.605236800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.605252480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.605275320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.609001120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.609045720Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.609063880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.609077760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.609166 env[1315]: time="2025-07-12T00:20:27.609090560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:20:27.603979 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:20:27.609497 env[1315]: time="2025-07-12T00:20:27.609227360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:20:27.609497 env[1315]: time="2025-07-12T00:20:27.609315440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:20:27.604224 systemd[1]: Finished extend-filesystems.service.
Jul 12 00:20:27.606612 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 12 00:20:27.610236 env[1315]: time="2025-07-12T00:20:27.610210880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:20:27.610300 env[1315]: time="2025-07-12T00:20:27.610248880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610300 env[1315]: time="2025-07-12T00:20:27.610263560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:20:27.610467 env[1315]: time="2025-07-12T00:20:27.610451760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610506 env[1315]: time="2025-07-12T00:20:27.610472520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610506 env[1315]: time="2025-07-12T00:20:27.610493720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610551 env[1315]: time="2025-07-12T00:20:27.610506080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610551 env[1315]: time="2025-07-12T00:20:27.610518560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610551 env[1315]: time="2025-07-12T00:20:27.610530600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610614 env[1315]: time="2025-07-12T00:20:27.610599760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610635 env[1315]: time="2025-07-12T00:20:27.610617880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.610661 env[1315]: time="2025-07-12T00:20:27.610640720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:20:27.610969 env[1315]: time="2025-07-12T00:20:27.610873960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.611011 env[1315]: time="2025-07-12T00:20:27.610977800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.611011 env[1315]: time="2025-07-12T00:20:27.610992120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.611074 env[1315]: time="2025-07-12T00:20:27.611004000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:20:27.611120 env[1315]: time="2025-07-12T00:20:27.611028440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 12 00:20:27.611150 env[1315]: time="2025-07-12T00:20:27.611120640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:20:27.611150 env[1315]: time="2025-07-12T00:20:27.611142880Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 12 00:20:27.611212 env[1315]: time="2025-07-12T00:20:27.611194040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:20:27.611637 env[1315]: time="2025-07-12T00:20:27.611534720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:20:27.613993 env[1315]: time="2025-07-12T00:20:27.611650360Z" level=info msg="Connect containerd service"
Jul 12 00:20:27.613993 env[1315]: time="2025-07-12T00:20:27.611694680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:20:27.613993 env[1315]: time="2025-07-12T00:20:27.612572440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.614360040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.614430600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.614618800Z" level=info msg="containerd successfully booted in 0.084461s"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.615660400Z" level=info msg="Start subscribing containerd event"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.615713600Z" level=info msg="Start recovering state"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.615821840Z" level=info msg="Start event monitor"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.615848680Z" level=info msg="Start snapshots syncer"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.615861200Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:20:27.616110 env[1315]: time="2025-07-12T00:20:27.615872160Z" level=info msg="Start streaming server"
Jul 12 00:20:27.614716 systemd[1]: Started containerd.service.
Jul 12 00:20:27.651303 locksmithd[1358]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:20:27.825414 tar[1313]: linux-arm64/LICENSE
Jul 12 00:20:27.825414 tar[1313]: linux-arm64/README.md
Jul 12 00:20:27.829471 systemd[1]: Finished prepare-helm.service.
Jul 12 00:20:28.604103 systemd-networkd[1101]: eth0: Gained IPv6LL
Jul 12 00:20:28.605743 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 12 00:20:28.606769 systemd[1]: Reached target network-online.target.
Jul 12 00:20:28.609588 systemd[1]: Starting kubelet.service...
Jul 12 00:20:29.249341 systemd[1]: Started kubelet.service.
Jul 12 00:20:29.339013 sshd_keygen[1324]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:20:29.357176 systemd[1]: Finished sshd-keygen.service.
Jul 12 00:20:29.359604 systemd[1]: Starting issuegen.service...
Jul 12 00:20:29.364754 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:20:29.365006 systemd[1]: Finished issuegen.service.
Jul 12 00:20:29.367184 systemd[1]: Starting systemd-user-sessions.service...
Jul 12 00:20:29.377889 systemd[1]: Finished systemd-user-sessions.service.
Jul 12 00:20:29.380255 systemd[1]: Started getty@tty1.service.
Jul 12 00:20:29.382346 systemd[1]: Started serial-getty@ttyAMA0.service.
Jul 12 00:20:29.383266 systemd[1]: Reached target getty.target.
Jul 12 00:20:29.383984 systemd[1]: Reached target multi-user.target.
Jul 12 00:20:29.386280 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 12 00:20:29.394163 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 12 00:20:29.394413 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 12 00:20:29.395287 systemd[1]: Startup finished in 5.133s (kernel) + 5.288s (userspace) = 10.422s.
Jul 12 00:20:29.756207 kubelet[1379]: E0712 00:20:29.756167 1379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:20:29.758033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:20:29.758178 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:20:32.802319 systemd[1]: Created slice system-sshd.slice.
Jul 12 00:20:32.803484 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:53284.service.
Jul 12 00:20:32.849344 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 53284 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o
Jul 12 00:20:32.851310 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:32.862405 systemd[1]: Created slice user-500.slice.
Jul 12 00:20:32.863370 systemd[1]: Starting user-runtime-dir@500.service...
Jul 12 00:20:32.866051 systemd-logind[1305]: New session 1 of user core.
Jul 12 00:20:32.871791 systemd[1]: Finished user-runtime-dir@500.service.
Jul 12 00:20:32.873108 systemd[1]: Starting user@500.service...
Jul 12 00:20:32.876226 (systemd)[1410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:32.940761 systemd[1410]: Queued start job for default target default.target.
Jul 12 00:20:32.941015 systemd[1410]: Reached target paths.target.
Jul 12 00:20:32.941031 systemd[1410]: Reached target sockets.target.
Jul 12 00:20:32.941041 systemd[1410]: Reached target timers.target.
Jul 12 00:20:32.941051 systemd[1410]: Reached target basic.target.
Jul 12 00:20:32.941092 systemd[1410]: Reached target default.target.
Jul 12 00:20:32.941117 systemd[1410]: Startup finished in 59ms.
Jul 12 00:20:32.941191 systemd[1]: Started user@500.service.
Jul 12 00:20:32.942157 systemd[1]: Started session-1.scope.
Jul 12 00:20:32.991908 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:53292.service.
Jul 12 00:20:33.031838 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 53292 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o
Jul 12 00:20:33.033378 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:33.036983 systemd-logind[1305]: New session 2 of user core.
Jul 12 00:20:33.037746 systemd[1]: Started session-2.scope.
Jul 12 00:20:33.090759 sshd[1419]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:33.093194 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:53308.service.
Jul 12 00:20:33.093620 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:53292.service: Deactivated successfully.
Jul 12 00:20:33.094683 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:20:33.094686 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:20:33.095840 systemd-logind[1305]: Removed session 2.
Jul 12 00:20:33.125384 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 53308 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o
Jul 12 00:20:33.126437 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:33.129904 systemd-logind[1305]: New session 3 of user core.
Jul 12 00:20:33.130434 systemd[1]: Started session-3.scope.
Jul 12 00:20:33.180160 sshd[1425]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:33.181861 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:53324.service.
Jul 12 00:20:33.183189 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:20:33.183358 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:53308.service: Deactivated successfully.
Jul 12 00:20:33.184039 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:20:33.184397 systemd-logind[1305]: Removed session 3.
Jul 12 00:20:33.214087 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 53324 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o
Jul 12 00:20:33.215684 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:33.219047 systemd-logind[1305]: New session 4 of user core.
Jul 12 00:20:33.219895 systemd[1]: Started session-4.scope.
Jul 12 00:20:33.272707 sshd[1431]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:33.275025 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:53326.service.
Jul 12 00:20:33.275451 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:53324.service: Deactivated successfully.
Jul 12 00:20:33.277273 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:20:33.277705 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:20:33.278531 systemd-logind[1305]: Removed session 4.
Jul 12 00:20:33.308921 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 53326 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o
Jul 12 00:20:33.310163 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:33.314843 systemd-logind[1305]: New session 5 of user core.
Jul 12 00:20:33.315448 systemd[1]: Started session-5.scope.
Jul 12 00:20:33.393446 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:20:33.393667 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 12 00:20:33.451088 systemd[1]: Starting docker.service...
Jul 12 00:20:33.546017 env[1456]: time="2025-07-12T00:20:33.545939746Z" level=info msg="Starting up"
Jul 12 00:20:33.547497 env[1456]: time="2025-07-12T00:20:33.547470105Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 12 00:20:33.547578 env[1456]: time="2025-07-12T00:20:33.547565977Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 12 00:20:33.547652 env[1456]: time="2025-07-12T00:20:33.547627760Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 12 00:20:33.547704 env[1456]: time="2025-07-12T00:20:33.547691662Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 12 00:20:33.549889 env[1456]: time="2025-07-12T00:20:33.549851124Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 12 00:20:33.549889 env[1456]: time="2025-07-12T00:20:33.549876461Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 12 00:20:33.549989 env[1456]: time="2025-07-12T00:20:33.549891807Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 12 00:20:33.549989 env[1456]: time="2025-07-12T00:20:33.549904595Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 12 00:20:33.757910 env[1456]: time="2025-07-12T00:20:33.757815163Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 12 00:20:33.757910 env[1456]: time="2025-07-12T00:20:33.757843537Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 12 00:20:33.758551 env[1456]: time="2025-07-12T00:20:33.757979173Z" level=info msg="Loading containers: start."
Jul 12 00:20:33.867986 kernel: Initializing XFRM netlink socket
Jul 12 00:20:33.892293 env[1456]: time="2025-07-12T00:20:33.892250775Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 12 00:20:33.949989 systemd-networkd[1101]: docker0: Link UP
Jul 12 00:20:33.969566 env[1456]: time="2025-07-12T00:20:33.969515980Z" level=info msg="Loading containers: done."
Jul 12 00:20:33.993929 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2916614431-merged.mount: Deactivated successfully.
Jul 12 00:20:33.997773 env[1456]: time="2025-07-12T00:20:33.997723350Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:20:33.997955 env[1456]: time="2025-07-12T00:20:33.997920129Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 12 00:20:33.998063 env[1456]: time="2025-07-12T00:20:33.998039260Z" level=info msg="Daemon has completed initialization"
Jul 12 00:20:34.011179 systemd[1]: Started docker.service.
Jul 12 00:20:34.017497 env[1456]: time="2025-07-12T00:20:34.017449710Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:20:34.606729 env[1315]: time="2025-07-12T00:20:34.606665533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 12 00:20:35.146293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247835411.mount: Deactivated successfully.
Jul 12 00:20:36.383292 env[1315]: time="2025-07-12T00:20:36.383238765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:36.384717 env[1315]: time="2025-07-12T00:20:36.384673482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:36.386466 env[1315]: time="2025-07-12T00:20:36.386439070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:36.388394 env[1315]: time="2025-07-12T00:20:36.388371532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:36.389040 env[1315]: time="2025-07-12T00:20:36.389015327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 12 00:20:36.392035 env[1315]: time="2025-07-12T00:20:36.391996478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 12 00:20:37.825613 env[1315]: time="2025-07-12T00:20:37.825569945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:37.826729 env[1315]: time="2025-07-12T00:20:37.826698507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:37.828539 env[1315]: time="2025-07-12T00:20:37.828510985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:37.830240 env[1315]: time="2025-07-12T00:20:37.830213860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:37.831134 env[1315]: time="2025-07-12T00:20:37.831095996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 12 00:20:37.831586 env[1315]: time="2025-07-12T00:20:37.831526412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 12 00:20:39.021073 env[1315]: time="2025-07-12T00:20:39.021025334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:39.023113 env[1315]: time="2025-07-12T00:20:39.023076979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:39.027006 env[1315]: time="2025-07-12T00:20:39.026977315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:39.029354 env[1315]: time="2025-07-12T00:20:39.029327974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:39.030070 env[1315]: time="2025-07-12T00:20:39.030043090Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 12 00:20:39.030565 env[1315]: time="2025-07-12T00:20:39.030541979Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 12 00:20:39.791740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:20:39.791860 systemd[1]: Stopped kubelet.service.
Jul 12 00:20:39.793275 systemd[1]: Starting kubelet.service...
Jul 12 00:20:39.888119 systemd[1]: Started kubelet.service.
Jul 12 00:20:39.923917 kubelet[1595]: E0712 00:20:39.923866 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:20:39.926403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:20:39.926544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:20:40.115276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661249097.mount: Deactivated successfully.
Jul 12 00:20:40.577041 env[1315]: time="2025-07-12T00:20:40.576912708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:40.578285 env[1315]: time="2025-07-12T00:20:40.578234458Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:40.579697 env[1315]: time="2025-07-12T00:20:40.579490885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:40.581004 env[1315]: time="2025-07-12T00:20:40.580944638Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:40.581549 env[1315]: time="2025-07-12T00:20:40.581353640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 12 00:20:40.581821 env[1315]: time="2025-07-12T00:20:40.581794823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 00:20:41.193478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860312699.mount: Deactivated successfully.
Jul 12 00:20:42.048703 env[1315]: time="2025-07-12T00:20:42.045157201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.050318 env[1315]: time="2025-07-12T00:20:42.050265105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.052438 env[1315]: time="2025-07-12T00:20:42.052400172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.054004 env[1315]: time="2025-07-12T00:20:42.053978124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.055726 env[1315]: time="2025-07-12T00:20:42.055692926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 00:20:42.057777 env[1315]: time="2025-07-12T00:20:42.056247842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:20:42.521504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772972448.mount: Deactivated successfully.
Jul 12 00:20:42.525678 env[1315]: time="2025-07-12T00:20:42.525639315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.527025 env[1315]: time="2025-07-12T00:20:42.526988744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.528560 env[1315]: time="2025-07-12T00:20:42.528536192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.529971 env[1315]: time="2025-07-12T00:20:42.529920643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:20:42.530568 env[1315]: time="2025-07-12T00:20:42.530542124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:20:42.531069 env[1315]: time="2025-07-12T00:20:42.531040829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 12 00:20:43.079924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963307835.mount: Deactivated successfully.
Jul 12 00:20:45.195900 env[1315]: time="2025-07-12T00:20:45.195831607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:45.197676 env[1315]: time="2025-07-12T00:20:45.197641763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:45.199426 env[1315]: time="2025-07-12T00:20:45.199386307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:45.201883 env[1315]: time="2025-07-12T00:20:45.201853067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:45.202829 env[1315]: time="2025-07-12T00:20:45.202794789Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:20:50.177441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:20:50.177608 systemd[1]: Stopped kubelet.service. Jul 12 00:20:50.179049 systemd[1]: Starting kubelet.service... Jul 12 00:20:50.285877 systemd[1]: Started kubelet.service. 
Jul 12 00:20:50.323931 kubelet[1632]: E0712 00:20:50.323896 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:20:50.325962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:20:50.326100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:20:50.973261 systemd[1]: Stopped kubelet.service. Jul 12 00:20:50.975248 systemd[1]: Starting kubelet.service... Jul 12 00:20:50.998663 systemd[1]: Reloading. Jul 12 00:20:51.057160 /usr/lib/systemd/system-generators/torcx-generator[1669]: time="2025-07-12T00:20:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:20:51.057494 /usr/lib/systemd/system-generators/torcx-generator[1669]: time="2025-07-12T00:20:51Z" level=info msg="torcx already run" Jul 12 00:20:51.218186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:20:51.218206 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:20:51.235202 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:20:51.292618 systemd[1]: Started kubelet.service. Jul 12 00:20:51.296251 systemd[1]: Stopping kubelet.service... 
Jul 12 00:20:51.297027 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:20:51.297377 systemd[1]: Stopped kubelet.service. Jul 12 00:20:51.299043 systemd[1]: Starting kubelet.service... Jul 12 00:20:51.394175 systemd[1]: Started kubelet.service. Jul 12 00:20:51.432809 kubelet[1733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:20:51.432809 kubelet[1733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:20:51.432809 kubelet[1733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:20:51.433245 kubelet[1733]: I0712 00:20:51.432859 1733 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:20:52.579759 kubelet[1733]: I0712 00:20:52.579712 1733 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:20:52.579759 kubelet[1733]: I0712 00:20:52.579744 1733 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:20:52.580197 kubelet[1733]: I0712 00:20:52.580173 1733 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:20:52.643883 kubelet[1733]: E0712 00:20:52.643842 1733 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:52.648545 kubelet[1733]: I0712 00:20:52.648508 1733 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:20:52.656644 kubelet[1733]: E0712 00:20:52.656607 1733 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:20:52.656644 kubelet[1733]: I0712 00:20:52.656644 1733 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:20:52.660431 kubelet[1733]: I0712 00:20:52.660397 1733 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:20:52.661404 kubelet[1733]: I0712 00:20:52.661371 1733 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:20:52.661591 kubelet[1733]: I0712 00:20:52.661531 1733 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:20:52.661738 kubelet[1733]: I0712 00:20:52.661564 1733 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 12 00:20:52.661977 kubelet[1733]: I0712 00:20:52.661941 1733 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:20:52.662013 kubelet[1733]: I0712 00:20:52.661986 1733 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:20:52.662265 kubelet[1733]: I0712 00:20:52.662239 1733 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:20:52.668884 kubelet[1733]: I0712 00:20:52.668847 1733 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:20:52.668884 kubelet[1733]: I0712 00:20:52.668884 1733 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:20:52.669018 kubelet[1733]: I0712 00:20:52.668913 1733 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:20:52.669018 kubelet[1733]: I0712 00:20:52.668925 1733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:20:52.669320 kubelet[1733]: W0712 00:20:52.669259 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:52.669320 kubelet[1733]: W0712 00:20:52.669276 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:52.669390 kubelet[1733]: E0712 00:20:52.669322 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:52.669390 
kubelet[1733]: E0712 00:20:52.669327 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:52.673903 kubelet[1733]: I0712 00:20:52.673880 1733 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:20:52.676237 kubelet[1733]: I0712 00:20:52.676194 1733 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:20:52.676404 kubelet[1733]: W0712 00:20:52.676378 1733 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:20:52.677426 kubelet[1733]: I0712 00:20:52.677398 1733 server.go:1274] "Started kubelet" Jul 12 00:20:52.691257 kubelet[1733]: I0712 00:20:52.691202 1733 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:20:52.691478 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 12 00:20:52.691639 kubelet[1733]: I0712 00:20:52.691615 1733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:20:52.692606 kubelet[1733]: I0712 00:20:52.692574 1733 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:20:52.692715 kubelet[1733]: I0712 00:20:52.692581 1733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:20:52.693068 kubelet[1733]: I0712 00:20:52.693053 1733 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:20:52.695135 kubelet[1733]: I0712 00:20:52.693997 1733 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:20:52.697902 kubelet[1733]: I0712 00:20:52.697863 1733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:20:52.699615 kubelet[1733]: E0712 00:20:52.699428 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:52.699877 kubelet[1733]: I0712 00:20:52.699856 1733 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:20:52.704101 kubelet[1733]: I0712 00:20:52.702065 1733 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:20:52.704101 kubelet[1733]: I0712 00:20:52.703384 1733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:20:52.704101 kubelet[1733]: E0712 00:20:52.702141 1733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms" Jul 12 00:20:52.704101 
kubelet[1733]: W0712 00:20:52.703050 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:52.704101 kubelet[1733]: E0712 00:20:52.703667 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:52.704101 kubelet[1733]: I0712 00:20:52.702652 1733 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:20:52.709989 kubelet[1733]: I0712 00:20:52.709654 1733 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:20:52.710081 kubelet[1733]: E0712 00:20:52.707729 1733 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851590f64faaf5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:20:52.677365597 +0000 UTC m=+1.279775977,LastTimestamp:2025-07-12 00:20:52.677365597 +0000 UTC m=+1.279775977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:20:52.710380 kubelet[1733]: E0712 00:20:52.710363 1733 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:20:52.725207 kubelet[1733]: I0712 00:20:52.725156 1733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:20:52.726255 kubelet[1733]: I0712 00:20:52.726228 1733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:20:52.726255 kubelet[1733]: I0712 00:20:52.726255 1733 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:20:52.726383 kubelet[1733]: I0712 00:20:52.726274 1733 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:20:52.726383 kubelet[1733]: E0712 00:20:52.726323 1733 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:20:52.727119 kubelet[1733]: W0712 00:20:52.727064 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:52.727211 kubelet[1733]: E0712 00:20:52.727130 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:52.728621 kubelet[1733]: I0712 00:20:52.728599 1733 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:20:52.728621 kubelet[1733]: I0712 00:20:52.728615 1733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:20:52.728735 kubelet[1733]: I0712 00:20:52.728632 1733 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:20:52.800129 kubelet[1733]: E0712 00:20:52.800088 1733 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:52.810610 kubelet[1733]: I0712 00:20:52.810587 1733 policy_none.go:49] "None policy: Start" Jul 12 00:20:52.811277 kubelet[1733]: I0712 00:20:52.811262 1733 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:20:52.811331 kubelet[1733]: I0712 00:20:52.811288 1733 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:20:52.816396 kubelet[1733]: I0712 00:20:52.816368 1733 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:20:52.816618 kubelet[1733]: I0712 00:20:52.816602 1733 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:20:52.816658 kubelet[1733]: I0712 00:20:52.816621 1733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:20:52.818203 kubelet[1733]: I0712 00:20:52.818188 1733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:20:52.818413 kubelet[1733]: E0712 00:20:52.818392 1733 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:20:52.904605 kubelet[1733]: I0712 00:20:52.904495 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:52.904816 kubelet[1733]: I0712 00:20:52.904782 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56a8938c39d35ff3cd31062a1a4d98bd-k8s-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"56a8938c39d35ff3cd31062a1a4d98bd\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:52.904965 kubelet[1733]: I0712 00:20:52.904934 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56a8938c39d35ff3cd31062a1a4d98bd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56a8938c39d35ff3cd31062a1a4d98bd\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:52.905073 kubelet[1733]: I0712 00:20:52.905058 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:52.905186 kubelet[1733]: I0712 00:20:52.905173 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:52.905683 kubelet[1733]: I0712 00:20:52.905620 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:52.905820 kubelet[1733]: I0712 00:20:52.905801 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:52.905944 kubelet[1733]: I0712 00:20:52.905928 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56a8938c39d35ff3cd31062a1a4d98bd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56a8938c39d35ff3cd31062a1a4d98bd\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:52.906080 kubelet[1733]: I0712 00:20:52.906066 1733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:52.906419 kubelet[1733]: E0712 00:20:52.904690 1733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms" Jul 12 00:20:52.917811 kubelet[1733]: I0712 00:20:52.917790 1733 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:20:52.918403 kubelet[1733]: E0712 00:20:52.918376 1733 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Jul 12 00:20:53.120426 kubelet[1733]: I0712 00:20:53.120393 1733 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:20:53.120766 kubelet[1733]: E0712 00:20:53.120725 1733 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" 
node="localhost" Jul 12 00:20:53.133077 kubelet[1733]: E0712 00:20:53.133054 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.133986 kubelet[1733]: E0712 00:20:53.133806 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.134078 env[1315]: time="2025-07-12T00:20:53.133791688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56a8938c39d35ff3cd31062a1a4d98bd,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:53.134497 env[1315]: time="2025-07-12T00:20:53.134452161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:53.134869 kubelet[1733]: E0712 00:20:53.134838 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.136261 env[1315]: time="2025-07-12T00:20:53.136222876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:53.307633 kubelet[1733]: E0712 00:20:53.307213 1733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms" Jul 12 00:20:53.522746 kubelet[1733]: I0712 00:20:53.522695 1733 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:20:53.523097 kubelet[1733]: E0712 00:20:53.523073 1733 kubelet_node_status.go:95] "Unable to register node 
with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Jul 12 00:20:53.605458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155686052.mount: Deactivated successfully. Jul 12 00:20:53.608104 env[1315]: time="2025-07-12T00:20:53.608055708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.609981 env[1315]: time="2025-07-12T00:20:53.609916520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.611694 env[1315]: time="2025-07-12T00:20:53.611642405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.612653 env[1315]: time="2025-07-12T00:20:53.612623798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.614697 env[1315]: time="2025-07-12T00:20:53.614669963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.615383 env[1315]: time="2025-07-12T00:20:53.615357390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.619262 env[1315]: time="2025-07-12T00:20:53.619216859Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.620564 env[1315]: time="2025-07-12T00:20:53.620523530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.622405 env[1315]: time="2025-07-12T00:20:53.622374584Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.625041 env[1315]: time="2025-07-12T00:20:53.625010680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.626468 env[1315]: time="2025-07-12T00:20:53.626439321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.627567 env[1315]: time="2025-07-12T00:20:53.627540483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:53.651659 env[1315]: time="2025-07-12T00:20:53.651084637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:53.651659 env[1315]: time="2025-07-12T00:20:53.651123027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:53.651659 env[1315]: time="2025-07-12T00:20:53.651133025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:53.651659 env[1315]: time="2025-07-12T00:20:53.651339293Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3ca346e8b69b3d6addb6b65e3af2581716550e54f723697783f5669efbbbf75 pid=1784 runtime=io.containerd.runc.v2 Jul 12 00:20:53.652262 env[1315]: time="2025-07-12T00:20:53.652080466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:53.652262 env[1315]: time="2025-07-12T00:20:53.652110139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:53.652262 env[1315]: time="2025-07-12T00:20:53.652120136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:53.652666 env[1315]: time="2025-07-12T00:20:53.651932903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:53.652666 env[1315]: time="2025-07-12T00:20:53.651981371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:53.652666 env[1315]: time="2025-07-12T00:20:53.651991329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:53.652666 env[1315]: time="2025-07-12T00:20:53.652293773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aee9578e39b7b74ce7697f4d670ab15f65ce2d05bd96018790e8178864640bdf pid=1790 runtime=io.containerd.runc.v2 Jul 12 00:20:53.653060 env[1315]: time="2025-07-12T00:20:53.652936291Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee5b8615d3871721586e7208deb09407b3e3ae93f48e23ade3785e9fd21f8a99 pid=1791 runtime=io.containerd.runc.v2 Jul 12 00:20:53.680422 kubelet[1733]: W0712 00:20:53.677338 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:53.680422 kubelet[1733]: E0712 00:20:53.677405 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:53.726179 env[1315]: time="2025-07-12T00:20:53.726129707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56a8938c39d35ff3cd31062a1a4d98bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee5b8615d3871721586e7208deb09407b3e3ae93f48e23ade3785e9fd21f8a99\"" Jul 12 00:20:53.728161 kubelet[1733]: E0712 00:20:53.727629 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.728811 env[1315]: time="2025-07-12T00:20:53.728770242Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3ca346e8b69b3d6addb6b65e3af2581716550e54f723697783f5669efbbbf75\"" Jul 12 00:20:53.729143 env[1315]: time="2025-07-12T00:20:53.729109117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"aee9578e39b7b74ce7697f4d670ab15f65ce2d05bd96018790e8178864640bdf\"" Jul 12 00:20:53.730392 kubelet[1733]: E0712 00:20:53.730364 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.731069 kubelet[1733]: E0712 00:20:53.730924 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.732225 env[1315]: time="2025-07-12T00:20:53.732183423Z" level=info msg="CreateContainer within sandbox \"ee5b8615d3871721586e7208deb09407b3e3ae93f48e23ade3785e9fd21f8a99\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:20:53.732350 env[1315]: time="2025-07-12T00:20:53.732306392Z" level=info msg="CreateContainer within sandbox \"f3ca346e8b69b3d6addb6b65e3af2581716550e54f723697783f5669efbbbf75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:20:53.733202 env[1315]: time="2025-07-12T00:20:53.733168375Z" level=info msg="CreateContainer within sandbox \"aee9578e39b7b74ce7697f4d670ab15f65ce2d05bd96018790e8178864640bdf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:20:53.748888 env[1315]: time="2025-07-12T00:20:53.748849828Z" level=info msg="CreateContainer within sandbox \"f3ca346e8b69b3d6addb6b65e3af2581716550e54f723697783f5669efbbbf75\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be5550f88d8b113aac0f73f3e6edb5a9d96389cb4ddc1ed79912e0b42104726d\"" Jul 12 00:20:53.749569 env[1315]: time="2025-07-12T00:20:53.749533496Z" level=info msg="StartContainer for \"be5550f88d8b113aac0f73f3e6edb5a9d96389cb4ddc1ed79912e0b42104726d\"" Jul 12 00:20:53.753156 env[1315]: time="2025-07-12T00:20:53.753123832Z" level=info msg="CreateContainer within sandbox \"ee5b8615d3871721586e7208deb09407b3e3ae93f48e23ade3785e9fd21f8a99\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2925d5e7f6b9cf0562720ff074bd15763c08b8537036ca005a52f8571f9dba78\"" Jul 12 00:20:53.753522 env[1315]: time="2025-07-12T00:20:53.753494819Z" level=info msg="StartContainer for \"2925d5e7f6b9cf0562720ff074bd15763c08b8537036ca005a52f8571f9dba78\"" Jul 12 00:20:53.753730 env[1315]: time="2025-07-12T00:20:53.753692849Z" level=info msg="CreateContainer within sandbox \"aee9578e39b7b74ce7697f4d670ab15f65ce2d05bd96018790e8178864640bdf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d716af9674332b95193b5e2d9a27350cf16d2a8aaec9c89193e45cc526080ec\"" Jul 12 00:20:53.754090 env[1315]: time="2025-07-12T00:20:53.754061556Z" level=info msg="StartContainer for \"5d716af9674332b95193b5e2d9a27350cf16d2a8aaec9c89193e45cc526080ec\"" Jul 12 00:20:53.772519 kubelet[1733]: W0712 00:20:53.772455 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:53.772635 kubelet[1733]: E0712 00:20:53.772524 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection 
refused" logger="UnhandledError" Jul 12 00:20:53.789421 kubelet[1733]: W0712 00:20:53.789321 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:53.789421 kubelet[1733]: E0712 00:20:53.789390 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:53.827372 kubelet[1733]: W0712 00:20:53.827269 1733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 12 00:20:53.827372 kubelet[1733]: E0712 00:20:53.827337 1733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:53.866523 env[1315]: time="2025-07-12T00:20:53.864352874Z" level=info msg="StartContainer for \"be5550f88d8b113aac0f73f3e6edb5a9d96389cb4ddc1ed79912e0b42104726d\" returns successfully" Jul 12 00:20:53.868728 env[1315]: time="2025-07-12T00:20:53.868684584Z" level=info msg="StartContainer for \"2925d5e7f6b9cf0562720ff074bd15763c08b8537036ca005a52f8571f9dba78\" returns successfully" Jul 12 00:20:53.877997 env[1315]: time="2025-07-12T00:20:53.877917180Z" level=info msg="StartContainer for 
\"5d716af9674332b95193b5e2d9a27350cf16d2a8aaec9c89193e45cc526080ec\" returns successfully" Jul 12 00:20:54.325036 kubelet[1733]: I0712 00:20:54.324927 1733 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:20:54.737701 kubelet[1733]: E0712 00:20:54.737611 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:54.739314 kubelet[1733]: E0712 00:20:54.739294 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:54.740740 kubelet[1733]: E0712 00:20:54.740714 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:55.237027 kubelet[1733]: E0712 00:20:55.236882 1733 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:20:55.336011 kubelet[1733]: I0712 00:20:55.335970 1733 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:20:55.336011 kubelet[1733]: E0712 00:20:55.336011 1733 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:20:55.371038 kubelet[1733]: E0712 00:20:55.370997 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:55.471463 kubelet[1733]: E0712 00:20:55.471407 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:55.572259 kubelet[1733]: E0712 00:20:55.572150 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 12 00:20:55.672349 kubelet[1733]: E0712 00:20:55.672313 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:55.743017 kubelet[1733]: E0712 00:20:55.742989 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:55.743345 kubelet[1733]: E0712 00:20:55.743304 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:55.772655 kubelet[1733]: E0712 00:20:55.772606 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:55.873818 kubelet[1733]: E0712 00:20:55.873651 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:55.974288 kubelet[1733]: E0712 00:20:55.974237 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:56.074823 kubelet[1733]: E0712 00:20:56.074778 1733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:56.672215 kubelet[1733]: I0712 00:20:56.672167 1733 apiserver.go:52] "Watching apiserver" Jul 12 00:20:56.703410 kubelet[1733]: I0712 00:20:56.703356 1733 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:20:56.750877 kubelet[1733]: E0712 00:20:56.750814 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:57.352370 kubelet[1733]: E0712 00:20:57.352323 1733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:57.374723 systemd[1]: Reloading. Jul 12 00:20:57.419508 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2025-07-12T00:20:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:20:57.419538 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2025-07-12T00:20:57Z" level=info msg="torcx already run" Jul 12 00:20:57.481052 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:20:57.481202 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:20:57.498656 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:20:57.576835 systemd[1]: Stopping kubelet.service... Jul 12 00:20:57.594314 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:20:57.594618 systemd[1]: Stopped kubelet.service. Jul 12 00:20:57.596429 systemd[1]: Starting kubelet.service... Jul 12 00:20:57.690500 systemd[1]: Started kubelet.service. Jul 12 00:20:57.729650 kubelet[2087]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:20:57.729650 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 12 00:20:57.729650 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:20:57.730107 kubelet[2087]: I0712 00:20:57.729694 2087 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:20:57.736224 kubelet[2087]: I0712 00:20:57.736186 2087 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:20:57.736353 kubelet[2087]: I0712 00:20:57.736342 2087 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:20:57.736654 kubelet[2087]: I0712 00:20:57.736638 2087 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:20:57.738029 kubelet[2087]: I0712 00:20:57.738011 2087 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:20:57.740064 kubelet[2087]: I0712 00:20:57.740033 2087 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:20:57.743016 kubelet[2087]: E0712 00:20:57.742990 2087 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:20:57.743109 kubelet[2087]: I0712 00:20:57.743094 2087 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:20:57.745470 kubelet[2087]: I0712 00:20:57.745448 2087 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:20:57.745841 kubelet[2087]: I0712 00:20:57.745816 2087 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:20:57.745959 kubelet[2087]: I0712 00:20:57.745925 2087 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:20:57.746194 kubelet[2087]: I0712 00:20:57.745991 2087 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 12 00:20:57.746264 kubelet[2087]: I0712 00:20:57.746206 2087 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:20:57.746264 kubelet[2087]: I0712 00:20:57.746218 2087 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:20:57.746264 kubelet[2087]: I0712 00:20:57.746260 2087 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:20:57.746384 kubelet[2087]: I0712 00:20:57.746373 2087 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:20:57.746424 kubelet[2087]: I0712 00:20:57.746389 2087 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:20:57.746424 kubelet[2087]: I0712 00:20:57.746408 2087 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:20:57.746424 kubelet[2087]: I0712 00:20:57.746422 2087 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:20:57.750669 kubelet[2087]: I0712 00:20:57.750641 2087 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:20:57.751216 kubelet[2087]: I0712 00:20:57.751191 2087 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:20:57.751666 kubelet[2087]: I0712 00:20:57.751645 2087 server.go:1274] "Started kubelet" Jul 12 00:20:57.754979 kubelet[2087]: I0712 00:20:57.752203 2087 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:20:57.754979 kubelet[2087]: I0712 00:20:57.753506 2087 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:20:57.754979 kubelet[2087]: I0712 00:20:57.754766 2087 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:20:57.755132 kubelet[2087]: I0712 00:20:57.755074 2087 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:20:57.756967 
kubelet[2087]: I0712 00:20:57.756932 2087 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:20:57.757478 kubelet[2087]: I0712 00:20:57.757457 2087 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:20:57.759344 kubelet[2087]: I0712 00:20:57.758107 2087 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:20:57.759344 kubelet[2087]: E0712 00:20:57.758238 2087 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:57.759344 kubelet[2087]: I0712 00:20:57.758526 2087 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:20:57.760915 kubelet[2087]: I0712 00:20:57.760894 2087 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:20:57.771768 kubelet[2087]: I0712 00:20:57.771725 2087 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:20:57.771894 kubelet[2087]: I0712 00:20:57.771847 2087 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:20:57.775996 kubelet[2087]: I0712 00:20:57.775966 2087 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:20:57.788565 kubelet[2087]: E0712 00:20:57.788531 2087 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:20:57.792431 kubelet[2087]: I0712 00:20:57.792395 2087 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:20:57.793629 kubelet[2087]: I0712 00:20:57.793607 2087 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:20:57.793740 kubelet[2087]: I0712 00:20:57.793729 2087 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:20:57.793803 kubelet[2087]: I0712 00:20:57.793795 2087 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:20:57.793904 kubelet[2087]: E0712 00:20:57.793887 2087 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:20:57.831401 kubelet[2087]: I0712 00:20:57.831373 2087 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:20:57.831401 kubelet[2087]: I0712 00:20:57.831393 2087 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:20:57.831582 kubelet[2087]: I0712 00:20:57.831415 2087 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:20:57.831610 kubelet[2087]: I0712 00:20:57.831592 2087 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:20:57.831640 kubelet[2087]: I0712 00:20:57.831604 2087 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:20:57.831640 kubelet[2087]: I0712 00:20:57.831623 2087 policy_none.go:49] "None policy: Start" Jul 12 00:20:57.832219 kubelet[2087]: I0712 00:20:57.832203 2087 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:20:57.832314 kubelet[2087]: I0712 00:20:57.832293 2087 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:20:57.832527 kubelet[2087]: I0712 00:20:57.832510 2087 state_mem.go:75] "Updated machine memory state" Jul 12 00:20:57.833855 kubelet[2087]: I0712 00:20:57.833829 2087 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:20:57.834120 kubelet[2087]: I0712 00:20:57.834102 2087 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:20:57.834214 kubelet[2087]: I0712 00:20:57.834185 2087 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:20:57.835019 kubelet[2087]: I0712 00:20:57.835000 2087 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:20:57.902596 kubelet[2087]: E0712 00:20:57.902549 2087 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:57.902936 kubelet[2087]: E0712 00:20:57.902910 2087 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:57.939222 kubelet[2087]: I0712 00:20:57.939193 2087 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:20:57.946605 kubelet[2087]: I0712 00:20:57.945113 2087 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 12 00:20:57.946830 kubelet[2087]: I0712 00:20:57.946814 2087 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:20:57.962846 kubelet[2087]: I0712 00:20:57.962782 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56a8938c39d35ff3cd31062a1a4d98bd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56a8938c39d35ff3cd31062a1a4d98bd\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:57.963016 kubelet[2087]: I0712 00:20:57.962866 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:57.963016 kubelet[2087]: I0712 00:20:57.962890 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:57.963016 kubelet[2087]: I0712 00:20:57.962937 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:57.963016 kubelet[2087]: I0712 00:20:57.962990 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56a8938c39d35ff3cd31062a1a4d98bd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56a8938c39d35ff3cd31062a1a4d98bd\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:57.963016 kubelet[2087]: I0712 00:20:57.963014 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56a8938c39d35ff3cd31062a1a4d98bd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56a8938c39d35ff3cd31062a1a4d98bd\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:57.963157 kubelet[2087]: I0712 00:20:57.963073 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:57.963157 kubelet[2087]: I0712 00:20:57.963100 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:57.963157 kubelet[2087]: I0712 00:20:57.963149 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:58.203349 kubelet[2087]: E0712 00:20:58.202908 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:58.203554 kubelet[2087]: E0712 00:20:58.203177 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:58.203655 kubelet[2087]: E0712 00:20:58.203212 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:58.375141 sudo[2122]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:20:58.375714 sudo[2122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 12 00:20:58.747187 kubelet[2087]: I0712 00:20:58.747150 2087 apiserver.go:52] "Watching apiserver" Jul 12 00:20:58.758991 kubelet[2087]: I0712 00:20:58.758942 2087 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:20:58.808221 kubelet[2087]: E0712 00:20:58.808178 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:58.809502 kubelet[2087]: E0712 00:20:58.808869 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:58.819659 sudo[2122]: pam_unix(sudo:session): session closed for user root Jul 12 00:20:58.820652 kubelet[2087]: E0712 00:20:58.820386 2087 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:58.820652 kubelet[2087]: E0712 00:20:58.820522 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:58.839449 kubelet[2087]: I0712 00:20:58.839385 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.839370366 podStartE2EDuration="2.839370366s" podCreationTimestamp="2025-07-12 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:58.831914925 +0000 UTC m=+1.136626422" watchObservedRunningTime="2025-07-12 00:20:58.839370366 +0000 UTC m=+1.144081863" Jul 12 00:20:58.848974 kubelet[2087]: I0712 00:20:58.848905 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.848887671 podStartE2EDuration="1.848887671s" podCreationTimestamp="2025-07-12 00:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:58.839738979 +0000 UTC m=+1.144450476" watchObservedRunningTime="2025-07-12 00:20:58.848887671 +0000 UTC 
m=+1.153599168" Jul 12 00:20:59.809593 kubelet[2087]: E0712 00:20:59.809560 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:00.703882 sudo[1444]: pam_unix(sudo:session): session closed for user root Jul 12 00:21:00.706165 sshd[1438]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:00.708924 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:21:00.709567 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:53326.service: Deactivated successfully. Jul 12 00:21:00.710415 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:21:00.711095 systemd-logind[1305]: Removed session 5. Jul 12 00:21:00.811183 kubelet[2087]: E0712 00:21:00.811151 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:01.230820 kubelet[2087]: E0712 00:21:01.230783 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:03.579558 kubelet[2087]: I0712 00:21:03.579506 2087 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:21:03.579975 env[1315]: time="2025-07-12T00:21:03.579916446Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:21:03.580291 kubelet[2087]: I0712 00:21:03.580270 2087 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:21:03.616823 kubelet[2087]: I0712 00:21:03.616764 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.616747384 podStartE2EDuration="6.616747384s" podCreationTimestamp="2025-07-12 00:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:58.849262363 +0000 UTC m=+1.153973900" watchObservedRunningTime="2025-07-12 00:21:03.616747384 +0000 UTC m=+5.921458881" Jul 12 00:21:03.704745 kubelet[2087]: I0712 00:21:03.704689 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c97d2532-bdc0-4c2d-be16-eb15e8a02984-kube-proxy\") pod \"kube-proxy-tcnp5\" (UID: \"c97d2532-bdc0-4c2d-be16-eb15e8a02984\") " pod="kube-system/kube-proxy-tcnp5" Jul 12 00:21:03.704745 kubelet[2087]: I0712 00:21:03.704738 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c97d2532-bdc0-4c2d-be16-eb15e8a02984-lib-modules\") pod \"kube-proxy-tcnp5\" (UID: \"c97d2532-bdc0-4c2d-be16-eb15e8a02984\") " pod="kube-system/kube-proxy-tcnp5" Jul 12 00:21:03.704937 kubelet[2087]: I0712 00:21:03.704760 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-hostproc\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.704937 kubelet[2087]: I0712 00:21:03.704777 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-kernel\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.704937 kubelet[2087]: I0712 00:21:03.704795 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-config-path\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.704937 kubelet[2087]: I0712 00:21:03.704812 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-run\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.704937 kubelet[2087]: I0712 00:21:03.704828 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-lib-modules\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705079 kubelet[2087]: I0712 00:21:03.704846 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9zs\" (UniqueName: \"kubernetes.io/projected/c97d2532-bdc0-4c2d-be16-eb15e8a02984-kube-api-access-df9zs\") pod \"kube-proxy-tcnp5\" (UID: \"c97d2532-bdc0-4c2d-be16-eb15e8a02984\") " pod="kube-system/kube-proxy-tcnp5" Jul 12 00:21:03.705079 kubelet[2087]: I0712 00:21:03.704866 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-net\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705163 kubelet[2087]: I0712 00:21:03.705130 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzl2v\" (UniqueName: \"kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-kube-api-access-xzl2v\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705205 kubelet[2087]: I0712 00:21:03.705178 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-etc-cni-netd\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705233 kubelet[2087]: I0712 00:21:03.705224 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eada4750-77df-4e71-80a8-964af09e2b3d-clustermesh-secrets\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705261 kubelet[2087]: I0712 00:21:03.705249 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-cgroup\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705290 kubelet[2087]: I0712 00:21:03.705270 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cni-path\") pod \"cilium-vhqk6\" (UID: 
\"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705318 kubelet[2087]: I0712 00:21:03.705290 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-xtables-lock\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705368 kubelet[2087]: I0712 00:21:03.705349 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-hubble-tls\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.705402 kubelet[2087]: I0712 00:21:03.705373 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c97d2532-bdc0-4c2d-be16-eb15e8a02984-xtables-lock\") pod \"kube-proxy-tcnp5\" (UID: \"c97d2532-bdc0-4c2d-be16-eb15e8a02984\") " pod="kube-system/kube-proxy-tcnp5" Jul 12 00:21:03.705427 kubelet[2087]: I0712 00:21:03.705399 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-bpf-maps\") pod \"cilium-vhqk6\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " pod="kube-system/cilium-vhqk6" Jul 12 00:21:03.806518 kubelet[2087]: I0712 00:21:03.806472 2087 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:21:03.817124 kubelet[2087]: E0712 00:21:03.817096 2087 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 12 00:21:03.817268 kubelet[2087]: E0712 00:21:03.817255 2087 projected.go:194] Error preparing data for projected volume kube-api-access-df9zs for pod kube-system/kube-proxy-tcnp5: configmap "kube-root-ca.crt" not found Jul 12 00:21:03.817394 kubelet[2087]: E0712 00:21:03.817379 2087 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c97d2532-bdc0-4c2d-be16-eb15e8a02984-kube-api-access-df9zs podName:c97d2532-bdc0-4c2d-be16-eb15e8a02984 nodeName:}" failed. No retries permitted until 2025-07-12 00:21:04.317356904 +0000 UTC m=+6.622068401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-df9zs" (UniqueName: "kubernetes.io/projected/c97d2532-bdc0-4c2d-be16-eb15e8a02984-kube-api-access-df9zs") pod "kube-proxy-tcnp5" (UID: "c97d2532-bdc0-4c2d-be16-eb15e8a02984") : configmap "kube-root-ca.crt" not found Jul 12 00:21:03.817525 kubelet[2087]: E0712 00:21:03.817491 2087 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 12 00:21:03.817525 kubelet[2087]: E0712 00:21:03.817518 2087 projected.go:194] Error preparing data for projected volume kube-api-access-xzl2v for pod kube-system/cilium-vhqk6: configmap "kube-root-ca.crt" not found Jul 12 00:21:03.817596 kubelet[2087]: E0712 00:21:03.817570 2087 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-kube-api-access-xzl2v podName:eada4750-77df-4e71-80a8-964af09e2b3d nodeName:}" failed. No retries permitted until 2025-07-12 00:21:04.31753892 +0000 UTC m=+6.622250417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xzl2v" (UniqueName: "kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-kube-api-access-xzl2v") pod "cilium-vhqk6" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d") : configmap "kube-root-ca.crt" not found Jul 12 00:21:04.520023 kubelet[2087]: E0712 00:21:04.519976 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:04.520664 env[1315]: time="2025-07-12T00:21:04.520618120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcnp5,Uid:c97d2532-bdc0-4c2d-be16-eb15e8a02984,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:04.532530 kubelet[2087]: E0712 00:21:04.530931 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:04.532631 env[1315]: time="2025-07-12T00:21:04.531577844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vhqk6,Uid:eada4750-77df-4e71-80a8-964af09e2b3d,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:04.536724 env[1315]: time="2025-07-12T00:21:04.536582825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:04.536724 env[1315]: time="2025-07-12T00:21:04.536631019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:04.536724 env[1315]: time="2025-07-12T00:21:04.536641857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:04.537010 env[1315]: time="2025-07-12T00:21:04.536954779Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b61389700fd1e8b77518c7c63c17b58108f6d0ae5a8702bdac99c30d6e0b0c7 pid=2178 runtime=io.containerd.runc.v2 Jul 12 00:21:04.553026 env[1315]: time="2025-07-12T00:21:04.548590899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:04.553026 env[1315]: time="2025-07-12T00:21:04.548633533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:04.553026 env[1315]: time="2025-07-12T00:21:04.548654531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:04.553026 env[1315]: time="2025-07-12T00:21:04.549077358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139 pid=2204 runtime=io.containerd.runc.v2 Jul 12 00:21:04.605161 env[1315]: time="2025-07-12T00:21:04.605120743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcnp5,Uid:c97d2532-bdc0-4c2d-be16-eb15e8a02984,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b61389700fd1e8b77518c7c63c17b58108f6d0ae5a8702bdac99c30d6e0b0c7\"" Jul 12 00:21:04.609959 kubelet[2087]: E0712 00:21:04.609896 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:04.614749 env[1315]: time="2025-07-12T00:21:04.612525547Z" level=info msg="CreateContainer within sandbox \"5b61389700fd1e8b77518c7c63c17b58108f6d0ae5a8702bdac99c30d6e0b0c7\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:21:04.618117 env[1315]: time="2025-07-12T00:21:04.618073700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vhqk6,Uid:eada4750-77df-4e71-80a8-964af09e2b3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\"" Jul 12 00:21:04.618964 kubelet[2087]: E0712 00:21:04.618910 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:04.620431 env[1315]: time="2025-07-12T00:21:04.620310983Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:21:04.630622 env[1315]: time="2025-07-12T00:21:04.630565794Z" level=info msg="CreateContainer within sandbox \"5b61389700fd1e8b77518c7c63c17b58108f6d0ae5a8702bdac99c30d6e0b0c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2f5ebccbfbd2d233e2cd47fb6983dcb335e752bed5f97fa94df76214b2c0a9e\"" Jul 12 00:21:04.631344 env[1315]: time="2025-07-12T00:21:04.631310262Z" level=info msg="StartContainer for \"b2f5ebccbfbd2d233e2cd47fb6983dcb335e752bed5f97fa94df76214b2c0a9e\"" Jul 12 00:21:04.717004 env[1315]: time="2025-07-12T00:21:04.706898788Z" level=info msg="StartContainer for \"b2f5ebccbfbd2d233e2cd47fb6983dcb335e752bed5f97fa94df76214b2c0a9e\" returns successfully" Jul 12 00:21:04.812053 kubelet[2087]: I0712 00:21:04.811934 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30613dfc-9a3a-436d-9a84-7daa41622c72-cilium-config-path\") pod \"cilium-operator-5d85765b45-sgqzd\" (UID: \"30613dfc-9a3a-436d-9a84-7daa41622c72\") " pod="kube-system/cilium-operator-5d85765b45-sgqzd" Jul 12 00:21:04.812053 kubelet[2087]: I0712 00:21:04.811990 2087 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82dq\" (UniqueName: \"kubernetes.io/projected/30613dfc-9a3a-436d-9a84-7daa41622c72-kube-api-access-t82dq\") pod \"cilium-operator-5d85765b45-sgqzd\" (UID: \"30613dfc-9a3a-436d-9a84-7daa41622c72\") " pod="kube-system/cilium-operator-5d85765b45-sgqzd" Jul 12 00:21:04.821048 kubelet[2087]: E0712 00:21:04.820992 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:04.830912 kubelet[2087]: I0712 00:21:04.830780 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tcnp5" podStartSLOduration=1.83076358 podStartE2EDuration="1.83076358s" podCreationTimestamp="2025-07-12 00:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:21:04.830078585 +0000 UTC m=+7.134790082" watchObservedRunningTime="2025-07-12 00:21:04.83076358 +0000 UTC m=+7.135475077" Jul 12 00:21:05.047754 kubelet[2087]: E0712 00:21:05.044892 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:05.047882 env[1315]: time="2025-07-12T00:21:05.045564424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sgqzd,Uid:30613dfc-9a3a-436d-9a84-7daa41622c72,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:05.059415 env[1315]: time="2025-07-12T00:21:05.059340266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:05.059415 env[1315]: time="2025-07-12T00:21:05.059385781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:05.059604 env[1315]: time="2025-07-12T00:21:05.059396980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:05.059822 env[1315]: time="2025-07-12T00:21:05.059786574Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6 pid=2343 runtime=io.containerd.runc.v2 Jul 12 00:21:05.127581 env[1315]: time="2025-07-12T00:21:05.127490320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sgqzd,Uid:30613dfc-9a3a-436d-9a84-7daa41622c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6\"" Jul 12 00:21:05.128007 kubelet[2087]: E0712 00:21:05.127981 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:05.429404 kubelet[2087]: E0712 00:21:05.429294 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:05.823767 kubelet[2087]: E0712 00:21:05.823450 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:08.522043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830991919.mount: Deactivated successfully. 
Jul 12 00:21:10.607615 kubelet[2087]: E0712 00:21:10.607584 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:10.751192 env[1315]: time="2025-07-12T00:21:10.750432852Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:21:10.752067 env[1315]: time="2025-07-12T00:21:10.752033637Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:21:10.755085 env[1315]: time="2025-07-12T00:21:10.755047304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:21:10.755393 env[1315]: time="2025-07-12T00:21:10.755355638Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:21:10.762370 env[1315]: time="2025-07-12T00:21:10.762330652Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:21:10.763784 env[1315]: time="2025-07-12T00:21:10.763740654Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:21:10.773111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108537707.mount: 
Deactivated successfully. Jul 12 00:21:10.779848 env[1315]: time="2025-07-12T00:21:10.779786306Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0\"" Jul 12 00:21:10.783929 env[1315]: time="2025-07-12T00:21:10.783893321Z" level=info msg="StartContainer for \"c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0\"" Jul 12 00:21:10.902013 env[1315]: time="2025-07-12T00:21:10.901767138Z" level=info msg="StartContainer for \"c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0\" returns successfully" Jul 12 00:21:10.951524 env[1315]: time="2025-07-12T00:21:10.951480321Z" level=info msg="shim disconnected" id=c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0 Jul 12 00:21:10.951524 env[1315]: time="2025-07-12T00:21:10.951526757Z" level=warning msg="cleaning up after shim disconnected" id=c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0 namespace=k8s.io Jul 12 00:21:10.951815 env[1315]: time="2025-07-12T00:21:10.951547076Z" level=info msg="cleaning up dead shim" Jul 12 00:21:10.959840 env[1315]: time="2025-07-12T00:21:10.959787823Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:21:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2508 runtime=io.containerd.runc.v2\n" Jul 12 00:21:11.242280 kubelet[2087]: E0712 00:21:11.242020 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:11.770351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0-rootfs.mount: Deactivated successfully. 
Jul 12 00:21:11.837009 kubelet[2087]: E0712 00:21:11.836682 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:11.846497 env[1315]: time="2025-07-12T00:21:11.846391896Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:21:11.931958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699166343.mount: Deactivated successfully. Jul 12 00:21:12.017224 env[1315]: time="2025-07-12T00:21:12.017149602Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963\"" Jul 12 00:21:12.019444 env[1315]: time="2025-07-12T00:21:12.019415195Z" level=info msg="StartContainer for \"19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963\"" Jul 12 00:21:12.082627 env[1315]: time="2025-07-12T00:21:12.082328789Z" level=info msg="StartContainer for \"19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963\" returns successfully" Jul 12 00:21:12.094879 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:21:12.095455 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:21:12.095631 systemd[1]: Stopping systemd-sysctl.service... Jul 12 00:21:12.097184 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:21:12.106487 systemd[1]: Finished systemd-sysctl.service. 
Jul 12 00:21:12.142537 env[1315]: time="2025-07-12T00:21:12.142475068Z" level=info msg="shim disconnected" id=19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963 Jul 12 00:21:12.142537 env[1315]: time="2025-07-12T00:21:12.142519385Z" level=warning msg="cleaning up after shim disconnected" id=19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963 namespace=k8s.io Jul 12 00:21:12.142537 env[1315]: time="2025-07-12T00:21:12.142528664Z" level=info msg="cleaning up dead shim" Jul 12 00:21:12.149418 env[1315]: time="2025-07-12T00:21:12.149361040Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:21:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n" Jul 12 00:21:12.381306 env[1315]: time="2025-07-12T00:21:12.380926501Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:21:12.382997 env[1315]: time="2025-07-12T00:21:12.382960111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:21:12.384438 env[1315]: time="2025-07-12T00:21:12.384404404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:21:12.384859 env[1315]: time="2025-07-12T00:21:12.384828333Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:21:12.387205 env[1315]: 
time="2025-07-12T00:21:12.387177519Z" level=info msg="CreateContainer within sandbox \"94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:21:12.398306 env[1315]: time="2025-07-12T00:21:12.398267021Z" level=info msg="CreateContainer within sandbox \"94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\"" Jul 12 00:21:12.398857 env[1315]: time="2025-07-12T00:21:12.398828299Z" level=info msg="StartContainer for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\"" Jul 12 00:21:12.482116 env[1315]: time="2025-07-12T00:21:12.482057474Z" level=info msg="StartContainer for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" returns successfully" Jul 12 00:21:12.770896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963-rootfs.mount: Deactivated successfully. Jul 12 00:21:12.838693 kubelet[2087]: E0712 00:21:12.838653 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:12.842135 kubelet[2087]: E0712 00:21:12.842107 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:12.843789 env[1315]: time="2025-07-12T00:21:12.843742767Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:21:12.862170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976823559.mount: Deactivated successfully. 
Jul 12 00:21:12.867541 env[1315]: time="2025-07-12T00:21:12.867498173Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0\"" Jul 12 00:21:12.873983 env[1315]: time="2025-07-12T00:21:12.868501739Z" level=info msg="StartContainer for \"09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0\"" Jul 12 00:21:12.874129 kubelet[2087]: I0712 00:21:12.868785 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-sgqzd" podStartSLOduration=1.6115899759999999 podStartE2EDuration="8.868768839s" podCreationTimestamp="2025-07-12 00:21:04 +0000 UTC" firstStartedPulling="2025-07-12 00:21:05.128400694 +0000 UTC m=+7.433112191" lastFinishedPulling="2025-07-12 00:21:12.385579597 +0000 UTC m=+14.690291054" observedRunningTime="2025-07-12 00:21:12.849816159 +0000 UTC m=+15.154527656" watchObservedRunningTime="2025-07-12 00:21:12.868768839 +0000 UTC m=+15.173480296" Jul 12 00:21:12.985793 env[1315]: time="2025-07-12T00:21:12.985735562Z" level=info msg="StartContainer for \"09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0\" returns successfully" Jul 12 00:21:12.986045 update_engine[1310]: I0712 00:21:12.986016 1310 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:21:13.117099 env[1315]: time="2025-07-12T00:21:13.116429845Z" level=info msg="shim disconnected" id=09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0 Jul 12 00:21:13.117099 env[1315]: time="2025-07-12T00:21:13.116481321Z" level=warning msg="cleaning up after shim disconnected" id=09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0 namespace=k8s.io Jul 12 00:21:13.117099 env[1315]: time="2025-07-12T00:21:13.116492961Z" level=info msg="cleaning up dead shim" Jul 12 00:21:13.135849 env[1315]: time="2025-07-12T00:21:13.135805704Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:21:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2678 runtime=io.containerd.runc.v2\n" Jul 12 00:21:13.770170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0-rootfs.mount: Deactivated successfully. Jul 12 00:21:13.850273 kubelet[2087]: E0712 00:21:13.850242 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:13.851137 kubelet[2087]: E0712 00:21:13.850517 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:13.854454 env[1315]: time="2025-07-12T00:21:13.854408759Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:21:13.903137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692406782.mount: Deactivated successfully. 
Jul 12 00:21:13.939509 env[1315]: time="2025-07-12T00:21:13.939445073Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8\"" Jul 12 00:21:13.940829 env[1315]: time="2025-07-12T00:21:13.940243897Z" level=info msg="StartContainer for \"96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8\"" Jul 12 00:21:14.047256 env[1315]: time="2025-07-12T00:21:14.047138377Z" level=info msg="StartContainer for \"96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8\" returns successfully" Jul 12 00:21:14.153621 env[1315]: time="2025-07-12T00:21:14.153566711Z" level=info msg="shim disconnected" id=96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8 Jul 12 00:21:14.153621 env[1315]: time="2025-07-12T00:21:14.153614667Z" level=warning msg="cleaning up after shim disconnected" id=96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8 namespace=k8s.io Jul 12 00:21:14.153621 env[1315]: time="2025-07-12T00:21:14.153624987Z" level=info msg="cleaning up dead shim" Jul 12 00:21:14.164441 env[1315]: time="2025-07-12T00:21:14.164394728Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:21:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2733 runtime=io.containerd.runc.v2\n" Jul 12 00:21:14.770262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8-rootfs.mount: Deactivated successfully. 
Jul 12 00:21:14.849512 kubelet[2087]: E0712 00:21:14.849483 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:14.851288 env[1315]: time="2025-07-12T00:21:14.851251313Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:21:14.866102 env[1315]: time="2025-07-12T00:21:14.866042873Z" level=info msg="CreateContainer within sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\"" Jul 12 00:21:14.866726 env[1315]: time="2025-07-12T00:21:14.866700590Z" level=info msg="StartContainer for \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\"" Jul 12 00:21:14.933497 env[1315]: time="2025-07-12T00:21:14.933453178Z" level=info msg="StartContainer for \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\" returns successfully" Jul 12 00:21:15.113751 kubelet[2087]: I0712 00:21:15.113642 2087 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:21:15.192852 kubelet[2087]: I0712 00:21:15.192789 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d79fcc8-6ac1-449f-b4fe-1735c8a47b82-config-volume\") pod \"coredns-7c65d6cfc9-cd4ft\" (UID: \"3d79fcc8-6ac1-449f-b4fe-1735c8a47b82\") " pod="kube-system/coredns-7c65d6cfc9-cd4ft" Jul 12 00:21:15.192852 kubelet[2087]: I0712 00:21:15.192833 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7v7\" (UniqueName: 
\"kubernetes.io/projected/3d79fcc8-6ac1-449f-b4fe-1735c8a47b82-kube-api-access-tb7v7\") pod \"coredns-7c65d6cfc9-cd4ft\" (UID: \"3d79fcc8-6ac1-449f-b4fe-1735c8a47b82\") " pod="kube-system/coredns-7c65d6cfc9-cd4ft" Jul 12 00:21:15.192852 kubelet[2087]: I0712 00:21:15.192853 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338360fe-b251-4f74-bb81-4d5fb908c6df-config-volume\") pod \"coredns-7c65d6cfc9-mhztp\" (UID: \"338360fe-b251-4f74-bb81-4d5fb908c6df\") " pod="kube-system/coredns-7c65d6cfc9-mhztp" Jul 12 00:21:15.193082 kubelet[2087]: I0712 00:21:15.192869 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbk4r\" (UniqueName: \"kubernetes.io/projected/338360fe-b251-4f74-bb81-4d5fb908c6df-kube-api-access-lbk4r\") pod \"coredns-7c65d6cfc9-mhztp\" (UID: \"338360fe-b251-4f74-bb81-4d5fb908c6df\") " pod="kube-system/coredns-7c65d6cfc9-mhztp" Jul 12 00:21:15.213979 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 12 00:21:15.440286 kubelet[2087]: E0712 00:21:15.440175 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:15.440902 env[1315]: time="2025-07-12T00:21:15.440862913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mhztp,Uid:338360fe-b251-4f74-bb81-4d5fb908c6df,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:15.442499 kubelet[2087]: E0712 00:21:15.442475 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:15.442922 env[1315]: time="2025-07-12T00:21:15.442887150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cd4ft,Uid:3d79fcc8-6ac1-449f-b4fe-1735c8a47b82,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:15.461972 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 12 00:21:15.853516 kubelet[2087]: E0712 00:21:15.853479 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:15.872912 kubelet[2087]: I0712 00:21:15.872843 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vhqk6" podStartSLOduration=6.732551716 podStartE2EDuration="12.872826992s" podCreationTimestamp="2025-07-12 00:21:03 +0000 UTC" firstStartedPulling="2025-07-12 00:21:04.619822124 +0000 UTC m=+6.924533621" lastFinishedPulling="2025-07-12 00:21:10.7600974 +0000 UTC m=+13.064808897" observedRunningTime="2025-07-12 00:21:15.87006596 +0000 UTC m=+18.174777457" watchObservedRunningTime="2025-07-12 00:21:15.872826992 +0000 UTC m=+18.177538489" Jul 12 00:21:16.855248 kubelet[2087]: E0712 00:21:16.855218 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:17.108077 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 12 00:21:17.108193 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 12 00:21:17.108546 systemd-networkd[1101]: cilium_host: Link UP Jul 12 00:21:17.108660 systemd-networkd[1101]: cilium_net: Link UP Jul 12 00:21:17.108775 systemd-networkd[1101]: cilium_net: Gained carrier Jul 12 00:21:17.108882 systemd-networkd[1101]: cilium_host: Gained carrier Jul 12 00:21:17.192864 systemd-networkd[1101]: cilium_vxlan: Link UP Jul 12 00:21:17.192871 systemd-networkd[1101]: cilium_vxlan: Gained carrier Jul 12 00:21:17.525975 kernel: NET: Registered PF_ALG protocol family Jul 12 00:21:17.821226 systemd-networkd[1101]: cilium_host: Gained IPv6LL Jul 12 00:21:17.857376 kubelet[2087]: E0712 00:21:17.857323 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:18.077102 systemd-networkd[1101]: cilium_net: Gained IPv6LL Jul 12 00:21:18.124142 systemd-networkd[1101]: lxc_health: Link UP Jul 12 00:21:18.142544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:21:18.143324 systemd-networkd[1101]: lxc_health: Gained carrier Jul 12 00:21:18.549374 systemd-networkd[1101]: lxc4e7a6b00d262: Link UP Jul 12 00:21:18.558826 kernel: eth0: renamed from tmp47df0 Jul 12 00:21:18.569216 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4e7a6b00d262: link becomes ready Jul 12 00:21:18.569318 kernel: eth0: renamed from tmp940f8 Jul 12 00:21:18.568885 systemd-networkd[1101]: lxc6bbe01d7e694: Link UP Jul 12 00:21:18.575492 systemd-networkd[1101]: lxc4e7a6b00d262: Gained carrier Jul 12 00:21:18.576976 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6bbe01d7e694: link becomes ready Jul 12 00:21:18.578472 systemd-networkd[1101]: lxc6bbe01d7e694: Gained carrier Jul 12 00:21:18.858832 kubelet[2087]: E0712 00:21:18.858717 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:19.164066 systemd-networkd[1101]: cilium_vxlan: Gained IPv6LL Jul 12 00:21:19.676078 systemd-networkd[1101]: lxc_health: Gained IPv6LL Jul 12 00:21:20.124099 systemd-networkd[1101]: lxc6bbe01d7e694: Gained IPv6LL Jul 12 00:21:20.444116 systemd-networkd[1101]: lxc4e7a6b00d262: Gained IPv6LL Jul 12 00:21:22.069765 env[1315]: time="2025-07-12T00:21:22.069533168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:22.069765 env[1315]: time="2025-07-12T00:21:22.069583526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:22.069765 env[1315]: time="2025-07-12T00:21:22.069602125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:22.070207 env[1315]: time="2025-07-12T00:21:22.069793998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/940f891d2354d1d081ab576734dcf45f75eb9c93b76190dc5531d1c7264267ba pid=3303 runtime=io.containerd.runc.v2 Jul 12 00:21:22.075795 env[1315]: time="2025-07-12T00:21:22.070260060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:22.075795 env[1315]: time="2025-07-12T00:21:22.070298498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:22.075795 env[1315]: time="2025-07-12T00:21:22.070308098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:22.075795 env[1315]: time="2025-07-12T00:21:22.070462612Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47df07d363c547523d7d1681c3ebe800657a46ffddc3089e4f48294f08ab1d59 pid=3308 runtime=io.containerd.runc.v2 Jul 12 00:21:22.082690 systemd[1]: run-containerd-runc-k8s.io-940f891d2354d1d081ab576734dcf45f75eb9c93b76190dc5531d1c7264267ba-runc.XC25i7.mount: Deactivated successfully. 
Jul 12 00:21:22.122911 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:21:22.136865 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:21:22.142259 env[1315]: time="2025-07-12T00:21:22.142214393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mhztp,Uid:338360fe-b251-4f74-bb81-4d5fb908c6df,Namespace:kube-system,Attempt:0,} returns sandbox id \"940f891d2354d1d081ab576734dcf45f75eb9c93b76190dc5531d1c7264267ba\"" Jul 12 00:21:22.143161 kubelet[2087]: E0712 00:21:22.142963 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:22.144474 env[1315]: time="2025-07-12T00:21:22.144440827Z" level=info msg="CreateContainer within sandbox \"940f891d2354d1d081ab576734dcf45f75eb9c93b76190dc5531d1c7264267ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:21:22.163386 env[1315]: time="2025-07-12T00:21:22.163325136Z" level=info msg="CreateContainer within sandbox \"940f891d2354d1d081ab576734dcf45f75eb9c93b76190dc5531d1c7264267ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c29701b5e5422f1c9f8f97f4cd0dcdd4d3fc8bb5338f4641da28f1fec33a97a\"" Jul 12 00:21:22.165900 env[1315]: time="2025-07-12T00:21:22.165513731Z" level=info msg="StartContainer for \"8c29701b5e5422f1c9f8f97f4cd0dcdd4d3fc8bb5338f4641da28f1fec33a97a\"" Jul 12 00:21:22.167725 env[1315]: time="2025-07-12T00:21:22.167690647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cd4ft,Uid:3d79fcc8-6ac1-449f-b4fe-1735c8a47b82,Namespace:kube-system,Attempt:0,} returns sandbox id \"47df07d363c547523d7d1681c3ebe800657a46ffddc3089e4f48294f08ab1d59\"" Jul 12 00:21:22.168458 kubelet[2087]: E0712 00:21:22.168431 2087 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:22.172563 env[1315]: time="2025-07-12T00:21:22.172521020Z" level=info msg="CreateContainer within sandbox \"47df07d363c547523d7d1681c3ebe800657a46ffddc3089e4f48294f08ab1d59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:21:22.186134 env[1315]: time="2025-07-12T00:21:22.186087974Z" level=info msg="CreateContainer within sandbox \"47df07d363c547523d7d1681c3ebe800657a46ffddc3089e4f48294f08ab1d59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03eadf6e5cf9b046df50b1762ad42d3dde054a12d5b240ab2c2f2dc43d09ca7f\"" Jul 12 00:21:22.186575 env[1315]: time="2025-07-12T00:21:22.186535077Z" level=info msg="StartContainer for \"03eadf6e5cf9b046df50b1762ad42d3dde054a12d5b240ab2c2f2dc43d09ca7f\"" Jul 12 00:21:22.235770 env[1315]: time="2025-07-12T00:21:22.235717373Z" level=info msg="StartContainer for \"8c29701b5e5422f1c9f8f97f4cd0dcdd4d3fc8bb5338f4641da28f1fec33a97a\" returns successfully" Jul 12 00:21:22.266988 env[1315]: time="2025-07-12T00:21:22.266906405Z" level=info msg="StartContainer for \"03eadf6e5cf9b046df50b1762ad42d3dde054a12d5b240ab2c2f2dc43d09ca7f\" returns successfully" Jul 12 00:21:22.866760 kubelet[2087]: E0712 00:21:22.866704 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:22.870717 kubelet[2087]: E0712 00:21:22.870691 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:22.878967 kubelet[2087]: I0712 00:21:22.878880 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cd4ft" podStartSLOduration=18.878864907 
podStartE2EDuration="18.878864907s" podCreationTimestamp="2025-07-12 00:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:21:22.878514281 +0000 UTC m=+25.183225778" watchObservedRunningTime="2025-07-12 00:21:22.878864907 +0000 UTC m=+25.183576404" Jul 12 00:21:22.897116 kubelet[2087]: I0712 00:21:22.897051 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mhztp" podStartSLOduration=18.897031364 podStartE2EDuration="18.897031364s" podCreationTimestamp="2025-07-12 00:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:21:22.896225955 +0000 UTC m=+25.200937452" watchObservedRunningTime="2025-07-12 00:21:22.897031364 +0000 UTC m=+25.201742861" Jul 12 00:21:23.871944 kubelet[2087]: E0712 00:21:23.871885 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:23.872283 kubelet[2087]: E0712 00:21:23.871974 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:24.619006 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:41564.service. Jul 12 00:21:24.657594 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 41564 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:24.659539 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:24.664042 systemd-logind[1305]: New session 6 of user core. Jul 12 00:21:24.664383 systemd[1]: Started session-6.scope. 
Jul 12 00:21:24.851595 sshd[3455]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:24.854567 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:21:24.854814 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:41564.service: Deactivated successfully. Jul 12 00:21:24.855599 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:21:24.856589 systemd-logind[1305]: Removed session 6. Jul 12 00:21:24.874058 kubelet[2087]: E0712 00:21:24.873963 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:24.874376 kubelet[2087]: E0712 00:21:24.874206 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:29.853966 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:41580.service. Jul 12 00:21:29.895624 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 41580 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:29.896968 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:29.900556 systemd-logind[1305]: New session 7 of user core. Jul 12 00:21:29.901429 systemd[1]: Started session-7.scope. Jul 12 00:21:30.019525 sshd[3470]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:30.022338 systemd[1]: sshd@6-10.0.0.41:22-10.0.0.1:41580.service: Deactivated successfully. Jul 12 00:21:30.023342 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:21:30.023400 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:21:30.024261 systemd-logind[1305]: Removed session 7. 
Jul 12 00:21:33.483595 kubelet[2087]: I0712 00:21:33.483549 2087 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:21:33.484078 kubelet[2087]: E0712 00:21:33.484039 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:33.892807 kubelet[2087]: E0712 00:21:33.892768 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:35.022369 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:52184.service. Jul 12 00:21:35.058902 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 52184 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:35.060334 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:35.064157 systemd-logind[1305]: New session 8 of user core. Jul 12 00:21:35.065036 systemd[1]: Started session-8.scope. Jul 12 00:21:35.183716 sshd[3487]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:35.186399 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:52184.service: Deactivated successfully. Jul 12 00:21:35.187618 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:21:35.188103 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:21:35.188850 systemd-logind[1305]: Removed session 8. Jul 12 00:21:40.186839 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:52194.service. Jul 12 00:21:40.220061 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 52194 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:40.221524 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:40.226452 systemd[1]: Started session-9.scope. Jul 12 00:21:40.226581 systemd-logind[1305]: New session 9 of user core. 
Jul 12 00:21:40.344388 sshd[3502]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:40.347023 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:52202.service. Jul 12 00:21:40.347758 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:52194.service: Deactivated successfully. Jul 12 00:21:40.349032 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:21:40.349083 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:21:40.349895 systemd-logind[1305]: Removed session 9. Jul 12 00:21:40.382210 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 52202 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:40.383907 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:40.387745 systemd-logind[1305]: New session 10 of user core. Jul 12 00:21:40.388249 systemd[1]: Started session-10.scope. Jul 12 00:21:40.544020 sshd[3517]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:40.546362 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:52216.service. Jul 12 00:21:40.548244 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:52202.service: Deactivated successfully. Jul 12 00:21:40.549327 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:21:40.549328 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:21:40.553897 systemd-logind[1305]: Removed session 10. Jul 12 00:21:40.595399 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:40.596904 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:40.603538 systemd-logind[1305]: New session 11 of user core. Jul 12 00:21:40.604007 systemd[1]: Started session-11.scope. 
Jul 12 00:21:40.750111 sshd[3529]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:40.752530 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:52216.service: Deactivated successfully. Jul 12 00:21:40.753499 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:21:40.753575 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:21:40.755397 systemd-logind[1305]: Removed session 11. Jul 12 00:21:45.752831 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:45244.service. Jul 12 00:21:45.785494 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 45244 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:45.786912 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:45.790792 systemd-logind[1305]: New session 12 of user core. Jul 12 00:21:45.792186 systemd[1]: Started session-12.scope. Jul 12 00:21:45.907585 sshd[3545]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:45.909859 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:45244.service: Deactivated successfully. Jul 12 00:21:45.910775 systemd-logind[1305]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:21:45.910840 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:21:45.911562 systemd-logind[1305]: Removed session 12. Jul 12 00:21:50.910893 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:45252.service. Jul 12 00:21:50.945632 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 45252 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:50.946754 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:50.950685 systemd-logind[1305]: New session 13 of user core. Jul 12 00:21:50.950910 systemd[1]: Started session-13.scope. 
Jul 12 00:21:51.062708 sshd[3559]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:51.065206 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:45264.service. Jul 12 00:21:51.065730 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:45252.service: Deactivated successfully. Jul 12 00:21:51.066706 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:21:51.066761 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:21:51.067416 systemd-logind[1305]: Removed session 13. Jul 12 00:21:51.098029 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 45264 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:51.099329 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:51.102510 systemd-logind[1305]: New session 14 of user core. Jul 12 00:21:51.103303 systemd[1]: Started session-14.scope. Jul 12 00:21:51.311456 sshd[3571]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:51.313875 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:45274.service. Jul 12 00:21:51.314602 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:21:51.314776 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:45264.service: Deactivated successfully. Jul 12 00:21:51.315672 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:21:51.316279 systemd-logind[1305]: Removed session 14. Jul 12 00:21:51.350462 sshd[3584]: Accepted publickey for core from 10.0.0.1 port 45274 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:51.352115 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:51.358243 systemd-logind[1305]: New session 15 of user core. Jul 12 00:21:51.359200 systemd[1]: Started session-15.scope. 
Jul 12 00:21:52.721902 sshd[3584]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:52.724782 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:36902.service. Jul 12 00:21:52.726032 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:45274.service: Deactivated successfully. Jul 12 00:21:52.727423 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:21:52.727672 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:21:52.730761 systemd-logind[1305]: Removed session 15. Jul 12 00:21:52.759923 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 36902 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:52.761244 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:52.764874 systemd-logind[1305]: New session 16 of user core. Jul 12 00:21:52.765685 systemd[1]: Started session-16.scope. Jul 12 00:21:52.998758 sshd[3603]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:53.001079 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:36914.service. Jul 12 00:21:53.009089 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:36902.service: Deactivated successfully. Jul 12 00:21:53.010328 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:21:53.010485 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:21:53.011396 systemd-logind[1305]: Removed session 16. Jul 12 00:21:53.037849 sshd[3615]: Accepted publickey for core from 10.0.0.1 port 36914 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:53.039017 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:53.042946 systemd-logind[1305]: New session 17 of user core. Jul 12 00:21:53.043255 systemd[1]: Started session-17.scope. 
Jul 12 00:21:53.158719 sshd[3615]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:53.165082 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:36914.service: Deactivated successfully. Jul 12 00:21:53.165861 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:21:53.166441 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:21:53.167538 systemd-logind[1305]: Removed session 17. Jul 12 00:21:58.161888 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:36924.service. Jul 12 00:21:58.203713 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 36924 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:21:58.204803 sshd[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:21:58.210300 systemd-logind[1305]: New session 18 of user core. Jul 12 00:21:58.211124 systemd[1]: Started session-18.scope. Jul 12 00:21:58.335301 sshd[3638]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:58.337832 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:21:58.338054 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:36924.service: Deactivated successfully. Jul 12 00:21:58.338872 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:21:58.339293 systemd-logind[1305]: Removed session 18. Jul 12 00:22:03.338669 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:40438.service. Jul 12 00:22:03.371826 sshd[3652]: Accepted publickey for core from 10.0.0.1 port 40438 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:03.373218 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:03.377977 systemd-logind[1305]: New session 19 of user core. Jul 12 00:22:03.378453 systemd[1]: Started session-19.scope. 
Jul 12 00:22:03.489549 sshd[3652]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:03.491801 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:40438.service: Deactivated successfully. Jul 12 00:22:03.492752 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:22:03.492821 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:22:03.493911 systemd-logind[1305]: Removed session 19. Jul 12 00:22:08.493085 systemd[1]: Started sshd@19-10.0.0.41:22-10.0.0.1:40454.service. Jul 12 00:22:08.527470 sshd[3668]: Accepted publickey for core from 10.0.0.1 port 40454 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:08.528754 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:08.533157 systemd[1]: Started session-20.scope. Jul 12 00:22:08.534020 systemd-logind[1305]: New session 20 of user core. Jul 12 00:22:08.647210 sshd[3668]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:08.649886 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:22:08.650211 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:40454.service: Deactivated successfully. Jul 12 00:22:08.651062 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:22:08.651786 systemd-logind[1305]: Removed session 20. Jul 12 00:22:13.650170 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:53588.service. Jul 12 00:22:13.691412 sshd[3682]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:13.692601 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:13.697729 systemd-logind[1305]: New session 21 of user core. Jul 12 00:22:13.698554 systemd[1]: Started session-21.scope. 
Jul 12 00:22:13.812709 sshd[3682]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:13.815342 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:53596.service. Jul 12 00:22:13.817205 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:53588.service: Deactivated successfully. Jul 12 00:22:13.819281 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:22:13.819825 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:22:13.820759 systemd-logind[1305]: Removed session 21. Jul 12 00:22:13.851831 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 53596 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:13.853121 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:13.856939 systemd-logind[1305]: New session 22 of user core. Jul 12 00:22:13.857629 systemd[1]: Started session-22.scope. Jul 12 00:22:15.824838 env[1315]: time="2025-07-12T00:22:15.824572539Z" level=info msg="StopContainer for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" with timeout 30 (s)" Jul 12 00:22:15.827866 env[1315]: time="2025-07-12T00:22:15.825143459Z" level=info msg="Stop container \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" with signal terminated" Jul 12 00:22:15.860797 env[1315]: time="2025-07-12T00:22:15.858672984Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:22:15.859939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90-rootfs.mount: Deactivated successfully. 
Jul 12 00:22:15.865315 env[1315]: time="2025-07-12T00:22:15.865264584Z" level=info msg="StopContainer for \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\" with timeout 2 (s)" Jul 12 00:22:15.865560 env[1315]: time="2025-07-12T00:22:15.865486864Z" level=info msg="Stop container \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\" with signal terminated" Jul 12 00:22:15.866745 env[1315]: time="2025-07-12T00:22:15.866698985Z" level=info msg="shim disconnected" id=1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90 Jul 12 00:22:15.866745 env[1315]: time="2025-07-12T00:22:15.866737545Z" level=warning msg="cleaning up after shim disconnected" id=1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90 namespace=k8s.io Jul 12 00:22:15.866745 env[1315]: time="2025-07-12T00:22:15.866746105Z" level=info msg="cleaning up dead shim" Jul 12 00:22:15.871765 systemd-networkd[1101]: lxc_health: Link DOWN Jul 12 00:22:15.871836 systemd-networkd[1101]: lxc_health: Lost carrier Jul 12 00:22:15.875820 env[1315]: time="2025-07-12T00:22:15.875762146Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3747 runtime=io.containerd.runc.v2\n" Jul 12 00:22:15.878098 env[1315]: time="2025-07-12T00:22:15.878062546Z" level=info msg="StopContainer for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" returns successfully" Jul 12 00:22:15.878660 env[1315]: time="2025-07-12T00:22:15.878595106Z" level=info msg="StopPodSandbox for \"94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6\"" Jul 12 00:22:15.878730 env[1315]: time="2025-07-12T00:22:15.878662546Z" level=info msg="Container to stop \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:22:15.880595 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6-shm.mount: Deactivated successfully. Jul 12 00:22:15.909286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6-rootfs.mount: Deactivated successfully. Jul 12 00:22:15.913955 env[1315]: time="2025-07-12T00:22:15.913884230Z" level=info msg="shim disconnected" id=94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6 Jul 12 00:22:15.913955 env[1315]: time="2025-07-12T00:22:15.913939910Z" level=warning msg="cleaning up after shim disconnected" id=94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6 namespace=k8s.io Jul 12 00:22:15.913955 env[1315]: time="2025-07-12T00:22:15.913957190Z" level=info msg="cleaning up dead shim" Jul 12 00:22:15.920777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1-rootfs.mount: Deactivated successfully. 
Jul 12 00:22:15.922957 env[1315]: time="2025-07-12T00:22:15.922910351Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" Jul 12 00:22:15.923267 env[1315]: time="2025-07-12T00:22:15.923231111Z" level=info msg="TearDown network for sandbox \"94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6\" successfully" Jul 12 00:22:15.923311 env[1315]: time="2025-07-12T00:22:15.923267671Z" level=info msg="StopPodSandbox for \"94bcd7943b3faf890240d4c98633b86b31e6f170d5f46faeb28e5af5f10a4ab6\" returns successfully" Jul 12 00:22:15.925828 env[1315]: time="2025-07-12T00:22:15.925749072Z" level=info msg="shim disconnected" id=2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1 Jul 12 00:22:15.925929 env[1315]: time="2025-07-12T00:22:15.925790632Z" level=warning msg="cleaning up after shim disconnected" id=2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1 namespace=k8s.io Jul 12 00:22:15.925929 env[1315]: time="2025-07-12T00:22:15.925922872Z" level=info msg="cleaning up dead shim" Jul 12 00:22:15.934088 env[1315]: time="2025-07-12T00:22:15.934049993Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\n" Jul 12 00:22:15.938130 env[1315]: time="2025-07-12T00:22:15.938079833Z" level=info msg="StopContainer for \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\" returns successfully" Jul 12 00:22:15.938671 env[1315]: time="2025-07-12T00:22:15.938636673Z" level=info msg="StopPodSandbox for \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\"" Jul 12 00:22:15.938742 env[1315]: time="2025-07-12T00:22:15.938696353Z" level=info msg="Container to stop \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 
00:22:15.938742 env[1315]: time="2025-07-12T00:22:15.938711233Z" level=info msg="Container to stop \"c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:22:15.938742 env[1315]: time="2025-07-12T00:22:15.938725313Z" level=info msg="Container to stop \"09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:22:15.938742 env[1315]: time="2025-07-12T00:22:15.938739633Z" level=info msg="Container to stop \"96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:22:15.940446 env[1315]: time="2025-07-12T00:22:15.938750233Z" level=info msg="Container to stop \"19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:22:15.960354 kubelet[2087]: I0712 00:22:15.960304 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30613dfc-9a3a-436d-9a84-7daa41622c72-cilium-config-path\") pod \"30613dfc-9a3a-436d-9a84-7daa41622c72\" (UID: \"30613dfc-9a3a-436d-9a84-7daa41622c72\") " Jul 12 00:22:15.960354 kubelet[2087]: I0712 00:22:15.960369 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t82dq\" (UniqueName: \"kubernetes.io/projected/30613dfc-9a3a-436d-9a84-7daa41622c72-kube-api-access-t82dq\") pod \"30613dfc-9a3a-436d-9a84-7daa41622c72\" (UID: \"30613dfc-9a3a-436d-9a84-7daa41622c72\") " Jul 12 00:22:15.964608 kubelet[2087]: I0712 00:22:15.964546 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30613dfc-9a3a-436d-9a84-7daa41622c72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod 
"30613dfc-9a3a-436d-9a84-7daa41622c72" (UID: "30613dfc-9a3a-436d-9a84-7daa41622c72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:22:15.964934 kubelet[2087]: I0712 00:22:15.964887 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30613dfc-9a3a-436d-9a84-7daa41622c72-kube-api-access-t82dq" (OuterVolumeSpecName: "kube-api-access-t82dq") pod "30613dfc-9a3a-436d-9a84-7daa41622c72" (UID: "30613dfc-9a3a-436d-9a84-7daa41622c72"). InnerVolumeSpecName "kube-api-access-t82dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:22:15.968107 env[1315]: time="2025-07-12T00:22:15.968058397Z" level=info msg="shim disconnected" id=8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139 Jul 12 00:22:15.968697 env[1315]: time="2025-07-12T00:22:15.968672517Z" level=warning msg="cleaning up after shim disconnected" id=8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139 namespace=k8s.io Jul 12 00:22:15.968790 env[1315]: time="2025-07-12T00:22:15.968775717Z" level=info msg="cleaning up dead shim" Jul 12 00:22:15.987729 kubelet[2087]: I0712 00:22:15.984636 2087 scope.go:117] "RemoveContainer" containerID="1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90" Jul 12 00:22:15.988758 env[1315]: time="2025-07-12T00:22:15.988677799Z" level=info msg="RemoveContainer for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\"" Jul 12 00:22:15.990992 env[1315]: time="2025-07-12T00:22:15.990899480Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3846 runtime=io.containerd.runc.v2\n" Jul 12 00:22:15.991763 env[1315]: time="2025-07-12T00:22:15.991722560Z" level=info msg="TearDown network for sandbox \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" successfully" Jul 12 00:22:15.991877 env[1315]: 
time="2025-07-12T00:22:15.991857760Z" level=info msg="StopPodSandbox for \"8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139\" returns successfully" Jul 12 00:22:15.998116 env[1315]: time="2025-07-12T00:22:15.998068001Z" level=info msg="RemoveContainer for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" returns successfully" Jul 12 00:22:15.999826 kubelet[2087]: I0712 00:22:15.999391 2087 scope.go:117] "RemoveContainer" containerID="1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90" Jul 12 00:22:15.999934 env[1315]: time="2025-07-12T00:22:15.999564601Z" level=error msg="ContainerStatus for \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\": not found" Jul 12 00:22:16.000736 kubelet[2087]: E0712 00:22:16.000428 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\": not found" containerID="1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90" Jul 12 00:22:16.000736 kubelet[2087]: I0712 00:22:16.000459 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90"} err="failed to get container status \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a1d49f90bcd911c29e2ba5d474c3cbb9cab1778eeae3a939fbc5d84be36bf90\": not found" Jul 12 00:22:16.060607 kubelet[2087]: I0712 00:22:16.060568 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-config-path\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060607 kubelet[2087]: I0712 00:22:16.060604 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-etc-cni-netd\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060794 kubelet[2087]: I0712 00:22:16.060626 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-net\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060794 kubelet[2087]: I0712 00:22:16.060643 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-run\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060794 kubelet[2087]: I0712 00:22:16.060661 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eada4750-77df-4e71-80a8-964af09e2b3d-clustermesh-secrets\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060794 kubelet[2087]: I0712 00:22:16.060678 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-hostproc\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060794 kubelet[2087]: I0712 00:22:16.060693 2087 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-bpf-maps\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060794 kubelet[2087]: I0712 00:22:16.060709 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-hubble-tls\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060930 kubelet[2087]: I0712 00:22:16.060725 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-xtables-lock\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060930 kubelet[2087]: I0712 00:22:16.060738 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cni-path\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060930 kubelet[2087]: I0712 00:22:16.060751 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-lib-modules\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060930 kubelet[2087]: I0712 00:22:16.060769 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzl2v\" (UniqueName: \"kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-kube-api-access-xzl2v\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: 
\"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060930 kubelet[2087]: I0712 00:22:16.060785 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-kernel\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.060930 kubelet[2087]: I0712 00:22:16.060798 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-cgroup\") pod \"eada4750-77df-4e71-80a8-964af09e2b3d\" (UID: \"eada4750-77df-4e71-80a8-964af09e2b3d\") " Jul 12 00:22:16.061081 kubelet[2087]: I0712 00:22:16.060829 2087 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t82dq\" (UniqueName: \"kubernetes.io/projected/30613dfc-9a3a-436d-9a84-7daa41622c72-kube-api-access-t82dq\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.061081 kubelet[2087]: I0712 00:22:16.060840 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30613dfc-9a3a-436d-9a84-7daa41622c72-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.061081 kubelet[2087]: I0712 00:22:16.060889 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061417 kubelet[2087]: I0712 00:22:16.061195 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061787 kubelet[2087]: I0712 00:22:16.061273 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061787 kubelet[2087]: I0712 00:22:16.061288 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061787 kubelet[2087]: I0712 00:22:16.061302 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cni-path" (OuterVolumeSpecName: "cni-path") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061787 kubelet[2087]: I0712 00:22:16.061316 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061787 kubelet[2087]: I0712 00:22:16.061661 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061937 kubelet[2087]: I0712 00:22:16.061729 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-hostproc" (OuterVolumeSpecName: "hostproc") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061937 kubelet[2087]: I0712 00:22:16.061744 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.061937 kubelet[2087]: I0712 00:22:16.061759 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:16.062822 kubelet[2087]: I0712 00:22:16.062772 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:22:16.064290 kubelet[2087]: I0712 00:22:16.064262 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eada4750-77df-4e71-80a8-964af09e2b3d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:22:16.064383 kubelet[2087]: I0712 00:22:16.064364 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-kube-api-access-xzl2v" (OuterVolumeSpecName: "kube-api-access-xzl2v") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "kube-api-access-xzl2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:22:16.065029 kubelet[2087]: I0712 00:22:16.065007 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eada4750-77df-4e71-80a8-964af09e2b3d" (UID: "eada4750-77df-4e71-80a8-964af09e2b3d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:22:16.161468 kubelet[2087]: I0712 00:22:16.161361 2087 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.161624 kubelet[2087]: I0712 00:22:16.161610 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.161689 kubelet[2087]: I0712 00:22:16.161678 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.161761 kubelet[2087]: I0712 00:22:16.161751 2087 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.161823 kubelet[2087]: I0712 00:22:16.161811 2087 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.161874 kubelet[2087]: I0712 00:22:16.161865 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.161941 kubelet[2087]: I0712 00:22:16.161932 2087 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eada4750-77df-4e71-80a8-964af09e2b3d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.162043 kubelet[2087]: I0712 00:22:16.162032 2087 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.162105 kubelet[2087]: I0712 00:22:16.162093 2087 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.162164 kubelet[2087]: I0712 00:22:16.162153 2087 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.162218 kubelet[2087]: I0712 00:22:16.162209 2087 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.162292 kubelet[2087]: I0712 00:22:16.162281 2087 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.162350 kubelet[2087]: I0712 00:22:16.162340 2087 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eada4750-77df-4e71-80a8-964af09e2b3d-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 
00:22:16.162426 kubelet[2087]: I0712 00:22:16.162413 2087 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzl2v\" (UniqueName: \"kubernetes.io/projected/eada4750-77df-4e71-80a8-964af09e2b3d-kube-api-access-xzl2v\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:16.833588 systemd[1]: var-lib-kubelet-pods-30613dfc\x2d9a3a\x2d436d\x2d9a84\x2d7daa41622c72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt82dq.mount: Deactivated successfully. Jul 12 00:22:16.833759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139-rootfs.mount: Deactivated successfully. Jul 12 00:22:16.833848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f6bae6d956c3c58e6976508dbaa0e0bf632c048a6983fc724414f301d33d139-shm.mount: Deactivated successfully. Jul 12 00:22:16.833929 systemd[1]: var-lib-kubelet-pods-eada4750\x2d77df\x2d4e71\x2d80a8\x2d964af09e2b3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzl2v.mount: Deactivated successfully. Jul 12 00:22:16.834036 systemd[1]: var-lib-kubelet-pods-eada4750\x2d77df\x2d4e71\x2d80a8\x2d964af09e2b3d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:22:16.834121 systemd[1]: var-lib-kubelet-pods-eada4750\x2d77df\x2d4e71\x2d80a8\x2d964af09e2b3d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 12 00:22:16.988533 kubelet[2087]: I0712 00:22:16.988504 2087 scope.go:117] "RemoveContainer" containerID="2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1" Jul 12 00:22:16.992644 env[1315]: time="2025-07-12T00:22:16.992602399Z" level=info msg="RemoveContainer for \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\"" Jul 12 00:22:16.995280 env[1315]: time="2025-07-12T00:22:16.995081519Z" level=info msg="RemoveContainer for \"2cf65a23b84cdba5faa925592a9702b78e722d4330d9534449b9004bffb872b1\" returns successfully" Jul 12 00:22:16.995346 kubelet[2087]: I0712 00:22:16.995244 2087 scope.go:117] "RemoveContainer" containerID="96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8" Jul 12 00:22:16.996373 env[1315]: time="2025-07-12T00:22:16.996342120Z" level=info msg="RemoveContainer for \"96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8\"" Jul 12 00:22:17.000310 env[1315]: time="2025-07-12T00:22:17.000037720Z" level=info msg="RemoveContainer for \"96667469e6d03bbaf5fc2519e53195234eddab69e3b6fd23408b77659012e4a8\" returns successfully" Jul 12 00:22:17.001040 kubelet[2087]: I0712 00:22:17.000193 2087 scope.go:117] "RemoveContainer" containerID="09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0" Jul 12 00:22:17.002271 env[1315]: time="2025-07-12T00:22:17.002239360Z" level=info msg="RemoveContainer for \"09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0\"" Jul 12 00:22:17.004861 env[1315]: time="2025-07-12T00:22:17.004827721Z" level=info msg="RemoveContainer for \"09e1741b4121af4808d1768efea8a19a46ce96666c9364657dd9b115259d90c0\" returns successfully" Jul 12 00:22:17.005130 kubelet[2087]: I0712 00:22:17.005105 2087 scope.go:117] "RemoveContainer" containerID="19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963" Jul 12 00:22:17.006195 env[1315]: time="2025-07-12T00:22:17.006168721Z" level=info msg="RemoveContainer for 
\"19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963\"" Jul 12 00:22:17.008210 env[1315]: time="2025-07-12T00:22:17.008176401Z" level=info msg="RemoveContainer for \"19950acd4a93cc9d8c3073e4be785b5bc434065f9a16e9b1308af925ff651963\" returns successfully" Jul 12 00:22:17.008401 kubelet[2087]: I0712 00:22:17.008374 2087 scope.go:117] "RemoveContainer" containerID="c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0" Jul 12 00:22:17.009474 env[1315]: time="2025-07-12T00:22:17.009447441Z" level=info msg="RemoveContainer for \"c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0\"" Jul 12 00:22:17.011625 env[1315]: time="2025-07-12T00:22:17.011593361Z" level=info msg="RemoveContainer for \"c75b19618216af7509261c5744fd20a663f9766e094a45429327d9c10c00b7e0\" returns successfully" Jul 12 00:22:17.775190 sshd[3694]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:17.777613 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:53612.service. Jul 12 00:22:17.778137 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:53596.service: Deactivated successfully. Jul 12 00:22:17.779535 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:22:17.779559 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:22:17.780831 systemd-logind[1305]: Removed session 22. 
Jul 12 00:22:17.796266 kubelet[2087]: I0712 00:22:17.796224 2087 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30613dfc-9a3a-436d-9a84-7daa41622c72" path="/var/lib/kubelet/pods/30613dfc-9a3a-436d-9a84-7daa41622c72/volumes" Jul 12 00:22:17.796787 kubelet[2087]: I0712 00:22:17.796766 2087 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" path="/var/lib/kubelet/pods/eada4750-77df-4e71-80a8-964af09e2b3d/volumes" Jul 12 00:22:17.816500 sshd[3863]: Accepted publickey for core from 10.0.0.1 port 53612 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:17.817720 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:17.821821 systemd-logind[1305]: New session 23 of user core. Jul 12 00:22:17.822330 systemd[1]: Started session-23.scope. Jul 12 00:22:17.847561 kubelet[2087]: E0712 00:22:17.847516 2087 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:22:19.260530 kubelet[2087]: I0712 00:22:19.260465 2087 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:22:19Z","lastTransitionTime":"2025-07-12T00:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:22:19.941037 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:53626.service. 
Jul 12 00:22:19.943044 sshd[3863]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:19.949988 kubelet[2087]: E0712 00:22:19.948249 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" containerName="mount-bpf-fs" Jul 12 00:22:19.949988 kubelet[2087]: E0712 00:22:19.948280 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" containerName="cilium-agent" Jul 12 00:22:19.949988 kubelet[2087]: E0712 00:22:19.948289 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" containerName="mount-cgroup" Jul 12 00:22:19.949988 kubelet[2087]: E0712 00:22:19.948294 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30613dfc-9a3a-436d-9a84-7daa41622c72" containerName="cilium-operator" Jul 12 00:22:19.949988 kubelet[2087]: E0712 00:22:19.948300 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" containerName="apply-sysctl-overwrites" Jul 12 00:22:19.949988 kubelet[2087]: E0712 00:22:19.948305 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" containerName="clean-cilium-state" Jul 12 00:22:19.949988 kubelet[2087]: I0712 00:22:19.948328 2087 memory_manager.go:354] "RemoveStaleState removing state" podUID="30613dfc-9a3a-436d-9a84-7daa41622c72" containerName="cilium-operator" Jul 12 00:22:19.949988 kubelet[2087]: I0712 00:22:19.948334 2087 memory_manager.go:354] "RemoveStaleState removing state" podUID="eada4750-77df-4e71-80a8-964af09e2b3d" containerName="cilium-agent" Jul 12 00:22:19.958375 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:53612.service: Deactivated successfully. Jul 12 00:22:19.959602 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:22:19.971592 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit. 
Jul 12 00:22:19.972527 systemd-logind[1305]: Removed session 23. Jul 12 00:22:19.985302 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 53626 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:19.987413 sshd[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:19.988142 kubelet[2087]: I0712 00:22:19.988110 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hostproc\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988283 kubelet[2087]: I0712 00:22:19.988263 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-kernel\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988377 kubelet[2087]: I0712 00:22:19.988364 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-lib-modules\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988457 kubelet[2087]: I0712 00:22:19.988444 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-ipsec-secrets\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988533 kubelet[2087]: I0712 00:22:19.988520 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hubble-tls\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988614 kubelet[2087]: I0712 00:22:19.988600 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-cgroup\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988698 kubelet[2087]: I0712 00:22:19.988685 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cni-path\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.988782 kubelet[2087]: I0712 00:22:19.988769 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-clustermesh-secrets\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989519 kubelet[2087]: I0712 00:22:19.988869 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-etc-cni-netd\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989519 kubelet[2087]: I0712 00:22:19.989417 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-config-path\") pod \"cilium-jp6zb\" (UID: 
\"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989519 kubelet[2087]: I0712 00:22:19.989449 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-run\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989519 kubelet[2087]: I0712 00:22:19.989465 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-bpf-maps\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989519 kubelet[2087]: I0712 00:22:19.989481 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-xtables-lock\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989519 kubelet[2087]: I0712 00:22:19.989503 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-net\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.989911 kubelet[2087]: I0712 00:22:19.989519 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6cpl\" (UniqueName: \"kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-kube-api-access-p6cpl\") pod \"cilium-jp6zb\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " pod="kube-system/cilium-jp6zb" Jul 12 00:22:19.992139 systemd-logind[1305]: New 
session 24 of user core. Jul 12 00:22:19.993213 systemd[1]: Started session-24.scope. Jul 12 00:22:20.129693 sshd[3877]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:20.131996 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:53636.service. Jul 12 00:22:20.135324 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:22:20.137278 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:53626.service: Deactivated successfully. Jul 12 00:22:20.146187 env[1315]: time="2025-07-12T00:22:20.142503519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jp6zb,Uid:e155e9a6-ea4c-4ad8-9620-b06b5855ff79,Namespace:kube-system,Attempt:0,}" Jul 12 00:22:20.146574 kubelet[2087]: E0712 00:22:20.139346 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:22:20.138073 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:22:20.139842 systemd-logind[1305]: Removed session 24. Jul 12 00:22:20.159533 env[1315]: time="2025-07-12T00:22:20.159459601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:22:20.159533 env[1315]: time="2025-07-12T00:22:20.159498881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:22:20.159705 env[1315]: time="2025-07-12T00:22:20.159509561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:22:20.159871 env[1315]: time="2025-07-12T00:22:20.159835601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884 pid=3908 runtime=io.containerd.runc.v2 Jul 12 00:22:20.169749 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:22:20.171265 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:22:20.177566 systemd[1]: Started session-25.scope. Jul 12 00:22:20.178024 systemd-logind[1305]: New session 25 of user core. Jul 12 00:22:20.208005 env[1315]: time="2025-07-12T00:22:20.205646486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jp6zb,Uid:e155e9a6-ea4c-4ad8-9620-b06b5855ff79,Namespace:kube-system,Attempt:0,} returns sandbox id \"184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884\"" Jul 12 00:22:20.208371 kubelet[2087]: E0712 00:22:20.206352 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:22:20.208451 env[1315]: time="2025-07-12T00:22:20.208314646Z" level=info msg="CreateContainer within sandbox \"184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:22:20.218695 env[1315]: time="2025-07-12T00:22:20.218648847Z" level=info msg="CreateContainer within sandbox \"184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f\"" Jul 12 00:22:20.220561 env[1315]: time="2025-07-12T00:22:20.220531688Z" level=info msg="StartContainer for 
\"47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f\"" Jul 12 00:22:20.268991 env[1315]: time="2025-07-12T00:22:20.267902173Z" level=info msg="StartContainer for \"47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f\" returns successfully" Jul 12 00:22:20.313024 env[1315]: time="2025-07-12T00:22:20.312978058Z" level=info msg="shim disconnected" id=47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f Jul 12 00:22:20.313315 env[1315]: time="2025-07-12T00:22:20.313293298Z" level=warning msg="cleaning up after shim disconnected" id=47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f namespace=k8s.io Jul 12 00:22:20.313395 env[1315]: time="2025-07-12T00:22:20.313380138Z" level=info msg="cleaning up dead shim" Jul 12 00:22:20.320293 env[1315]: time="2025-07-12T00:22:20.320218619Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3999 runtime=io.containerd.runc.v2\n" Jul 12 00:22:20.794612 kubelet[2087]: E0712 00:22:20.794564 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:22:21.000315 env[1315]: time="2025-07-12T00:22:21.000271813Z" level=info msg="StopPodSandbox for \"184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884\"" Jul 12 00:22:21.000469 env[1315]: time="2025-07-12T00:22:21.000331133Z" level=info msg="Container to stop \"47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:22:21.026715 env[1315]: time="2025-07-12T00:22:21.026664536Z" level=info msg="shim disconnected" id=184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884 Jul 12 00:22:21.026715 env[1315]: time="2025-07-12T00:22:21.026714336Z" level=warning msg="cleaning up after shim disconnected" 
id=184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884 namespace=k8s.io Jul 12 00:22:21.026907 env[1315]: time="2025-07-12T00:22:21.026724896Z" level=info msg="cleaning up dead shim" Jul 12 00:22:21.033590 env[1315]: time="2025-07-12T00:22:21.033553577Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4032 runtime=io.containerd.runc.v2\n" Jul 12 00:22:21.033860 env[1315]: time="2025-07-12T00:22:21.033834617Z" level=info msg="TearDown network for sandbox \"184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884\" successfully" Jul 12 00:22:21.033907 env[1315]: time="2025-07-12T00:22:21.033859977Z" level=info msg="StopPodSandbox for \"184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884\" returns successfully" Jul 12 00:22:21.094931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884-rootfs.mount: Deactivated successfully. Jul 12 00:22:21.095108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-184639af67b1edde574d2c747d726993e5e76a79e1ab9e02f75e97023112e884-shm.mount: Deactivated successfully. 
Jul 12 00:22:21.096207 kubelet[2087]: I0712 00:22:21.095962 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hubble-tls\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096207 kubelet[2087]: I0712 00:22:21.095998 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6cpl\" (UniqueName: \"kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-kube-api-access-p6cpl\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096207 kubelet[2087]: I0712 00:22:21.096027 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-run\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096207 kubelet[2087]: I0712 00:22:21.096053 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-xtables-lock\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096207 kubelet[2087]: I0712 00:22:21.096068 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-bpf-maps\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096207 kubelet[2087]: I0712 00:22:21.096083 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cni-path\") pod 
\"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096458 kubelet[2087]: I0712 00:22:21.096096 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hostproc\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096458 kubelet[2087]: I0712 00:22:21.096118 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-etc-cni-netd\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096458 kubelet[2087]: I0712 00:22:21.096138 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-config-path\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096458 kubelet[2087]: I0712 00:22:21.096155 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-kernel\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096458 kubelet[2087]: I0712 00:22:21.096169 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-lib-modules\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096458 kubelet[2087]: I0712 00:22:21.096194 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-clustermesh-secrets\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096594 kubelet[2087]: I0712 00:22:21.096216 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-cgroup\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096594 kubelet[2087]: I0712 00:22:21.096235 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-net\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096594 kubelet[2087]: I0712 00:22:21.096254 2087 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-ipsec-secrets\") pod \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\" (UID: \"e155e9a6-ea4c-4ad8-9620-b06b5855ff79\") " Jul 12 00:22:21.096594 kubelet[2087]: I0712 00:22:21.096373 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096594 kubelet[2087]: I0712 00:22:21.096410 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096712 kubelet[2087]: I0712 00:22:21.096428 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096712 kubelet[2087]: I0712 00:22:21.096480 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096712 kubelet[2087]: I0712 00:22:21.096497 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cni-path" (OuterVolumeSpecName: "cni-path") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096712 kubelet[2087]: I0712 00:22:21.096513 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hostproc" (OuterVolumeSpecName: "hostproc") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096712 kubelet[2087]: I0712 00:22:21.096527 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096827 kubelet[2087]: I0712 00:22:21.096610 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096827 kubelet[2087]: I0712 00:22:21.096680 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.096827 kubelet[2087]: I0712 00:22:21.096701 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:22:21.098882 kubelet[2087]: I0712 00:22:21.098830 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:22:21.101489 systemd[1]: var-lib-kubelet-pods-e155e9a6\x2dea4c\x2d4ad8\x2d9620\x2db06b5855ff79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp6cpl.mount: Deactivated successfully. Jul 12 00:22:21.102089 kubelet[2087]: I0712 00:22:21.102055 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:22:21.102174 kubelet[2087]: I0712 00:22:21.102120 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:22:21.103534 kubelet[2087]: I0712 00:22:21.103505 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-kube-api-access-p6cpl" (OuterVolumeSpecName: "kube-api-access-p6cpl") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "kube-api-access-p6cpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:22:21.103723 kubelet[2087]: I0712 00:22:21.103701 2087 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e155e9a6-ea4c-4ad8-9620-b06b5855ff79" (UID: "e155e9a6-ea4c-4ad8-9620-b06b5855ff79"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:22:21.103719 systemd[1]: var-lib-kubelet-pods-e155e9a6\x2dea4c\x2d4ad8\x2d9620\x2db06b5855ff79-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 12 00:22:21.103836 systemd[1]: var-lib-kubelet-pods-e155e9a6\x2dea4c\x2d4ad8\x2d9620\x2db06b5855ff79-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:22:21.106161 systemd[1]: var-lib-kubelet-pods-e155e9a6\x2dea4c\x2d4ad8\x2d9620\x2db06b5855ff79-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:22:21.196898 kubelet[2087]: I0712 00:22:21.196856 2087 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197112 kubelet[2087]: I0712 00:22:21.197097 2087 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197175 kubelet[2087]: I0712 00:22:21.197164 2087 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197261 kubelet[2087]: I0712 00:22:21.197248 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197325 kubelet[2087]: I0712 00:22:21.197311 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197378 kubelet[2087]: I0712 00:22:21.197368 2087 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197450 kubelet[2087]: I0712 00:22:21.197439 2087 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6cpl\" (UniqueName: \"kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-kube-api-access-p6cpl\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197502 kubelet[2087]: I0712 00:22:21.197493 
2087 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197555 kubelet[2087]: I0712 00:22:21.197546 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197606 kubelet[2087]: I0712 00:22:21.197596 2087 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197663 kubelet[2087]: I0712 00:22:21.197653 2087 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197721 kubelet[2087]: I0712 00:22:21.197711 2087 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197783 kubelet[2087]: I0712 00:22:21.197774 2087 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197844 kubelet[2087]: I0712 00:22:21.197834 2087 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:21.197900 kubelet[2087]: I0712 00:22:21.197890 2087 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e155e9a6-ea4c-4ad8-9620-b06b5855ff79-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 12 00:22:22.003934 kubelet[2087]: I0712 00:22:22.003899 2087 scope.go:117] "RemoveContainer" containerID="47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f"
Jul 12 00:22:22.005441 env[1315]: time="2025-07-12T00:22:22.005402921Z" level=info msg="RemoveContainer for \"47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f\""
Jul 12 00:22:22.010273 env[1315]: time="2025-07-12T00:22:22.010233842Z" level=info msg="RemoveContainer for \"47e2a131e30faccfb82f5c90ba9e9a0d11dc122e70b21604e6c334bf1c20fd4f\" returns successfully"
Jul 12 00:22:22.045101 kubelet[2087]: E0712 00:22:22.044883 2087 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e155e9a6-ea4c-4ad8-9620-b06b5855ff79" containerName="mount-cgroup"
Jul 12 00:22:22.045101 kubelet[2087]: I0712 00:22:22.044935 2087 memory_manager.go:354] "RemoveStaleState removing state" podUID="e155e9a6-ea4c-4ad8-9620-b06b5855ff79" containerName="mount-cgroup"
Jul 12 00:22:22.102966 kubelet[2087]: I0712 00:22:22.102891 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-cilium-cgroup\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.102966 kubelet[2087]: I0712 00:22:22.102944 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-cilium-run\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103156 kubelet[2087]: I0712 00:22:22.102989 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-bpf-maps\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103156 kubelet[2087]: I0712 00:22:22.103008 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-hostproc\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103156 kubelet[2087]: I0712 00:22:22.103058 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-lib-modules\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103156 kubelet[2087]: I0712 00:22:22.103081 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-cni-path\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103156 kubelet[2087]: I0712 00:22:22.103119 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-xtables-lock\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103156 kubelet[2087]: I0712 00:22:22.103137 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c14a88b-0c57-469a-8a71-4f5e457eaf05-clustermesh-secrets\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103309 kubelet[2087]: I0712 00:22:22.103154 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-etc-cni-netd\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103309 kubelet[2087]: I0712 00:22:22.103195 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c14a88b-0c57-469a-8a71-4f5e457eaf05-cilium-ipsec-secrets\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103309 kubelet[2087]: I0712 00:22:22.103226 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c14a88b-0c57-469a-8a71-4f5e457eaf05-hubble-tls\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103309 kubelet[2087]: I0712 00:22:22.103256 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c14a88b-0c57-469a-8a71-4f5e457eaf05-cilium-config-path\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103309 kubelet[2087]: I0712 00:22:22.103275 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-host-proc-sys-net\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103426 kubelet[2087]: I0712 00:22:22.103293 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c14a88b-0c57-469a-8a71-4f5e457eaf05-host-proc-sys-kernel\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.103426 kubelet[2087]: I0712 00:22:22.103336 2087 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lxb\" (UniqueName: \"kubernetes.io/projected/3c14a88b-0c57-469a-8a71-4f5e457eaf05-kube-api-access-b8lxb\") pod \"cilium-8bnml\" (UID: \"3c14a88b-0c57-469a-8a71-4f5e457eaf05\") " pod="kube-system/cilium-8bnml"
Jul 12 00:22:22.348172 kubelet[2087]: E0712 00:22:22.348009 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:22.349001 env[1315]: time="2025-07-12T00:22:22.348592758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8bnml,Uid:3c14a88b-0c57-469a-8a71-4f5e457eaf05,Namespace:kube-system,Attempt:0,}"
Jul 12 00:22:22.360421 env[1315]: time="2025-07-12T00:22:22.360354439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:22:22.360421 env[1315]: time="2025-07-12T00:22:22.360391919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:22:22.360583 env[1315]: time="2025-07-12T00:22:22.360407359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:22:22.361061 env[1315]: time="2025-07-12T00:22:22.360998719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20 pid=4061 runtime=io.containerd.runc.v2
Jul 12 00:22:22.394407 env[1315]: time="2025-07-12T00:22:22.394358083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8bnml,Uid:3c14a88b-0c57-469a-8a71-4f5e457eaf05,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\""
Jul 12 00:22:22.395108 kubelet[2087]: E0712 00:22:22.394891 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:22.396506 env[1315]: time="2025-07-12T00:22:22.396472043Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:22:22.404996 env[1315]: time="2025-07-12T00:22:22.404941324Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"930fd449b31dc541b4d6453d2691ac1287a7d44a3090d4d9111f40b7e44a5cf4\""
Jul 12 00:22:22.405469 env[1315]: time="2025-07-12T00:22:22.405428724Z" level=info msg="StartContainer for \"930fd449b31dc541b4d6453d2691ac1287a7d44a3090d4d9111f40b7e44a5cf4\""
Jul 12 00:22:22.446985 env[1315]: time="2025-07-12T00:22:22.446886928Z" level=info msg="StartContainer for \"930fd449b31dc541b4d6453d2691ac1287a7d44a3090d4d9111f40b7e44a5cf4\" returns successfully"
Jul 12 00:22:22.477133 env[1315]: time="2025-07-12T00:22:22.477082171Z" level=info msg="shim disconnected" id=930fd449b31dc541b4d6453d2691ac1287a7d44a3090d4d9111f40b7e44a5cf4
Jul 12 00:22:22.477133 env[1315]: time="2025-07-12T00:22:22.477127691Z" level=warning msg="cleaning up after shim disconnected" id=930fd449b31dc541b4d6453d2691ac1287a7d44a3090d4d9111f40b7e44a5cf4 namespace=k8s.io
Jul 12 00:22:22.477133 env[1315]: time="2025-07-12T00:22:22.477136731Z" level=info msg="cleaning up dead shim"
Jul 12 00:22:22.483090 env[1315]: time="2025-07-12T00:22:22.483059052Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4145 runtime=io.containerd.runc.v2\n"
Jul 12 00:22:22.848609 kubelet[2087]: E0712 00:22:22.848550 2087 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:22:23.007774 kubelet[2087]: E0712 00:22:23.007745 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:23.012848 env[1315]: time="2025-07-12T00:22:23.012806268Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:22:23.030884 env[1315]: time="2025-07-12T00:22:23.030824630Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a969906abe50fb8d632f42426bee4b03a7b98d03c6217c27607f862006c016a0\""
Jul 12 00:22:23.031477 env[1315]: time="2025-07-12T00:22:23.031445830Z" level=info msg="StartContainer for \"a969906abe50fb8d632f42426bee4b03a7b98d03c6217c27607f862006c016a0\""
Jul 12 00:22:23.075178 env[1315]: time="2025-07-12T00:22:23.075136554Z" level=info msg="StartContainer for \"a969906abe50fb8d632f42426bee4b03a7b98d03c6217c27607f862006c016a0\" returns successfully"
Jul 12 00:22:23.096664 env[1315]: time="2025-07-12T00:22:23.096619076Z" level=info msg="shim disconnected" id=a969906abe50fb8d632f42426bee4b03a7b98d03c6217c27607f862006c016a0
Jul 12 00:22:23.096919 env[1315]: time="2025-07-12T00:22:23.096900836Z" level=warning msg="cleaning up after shim disconnected" id=a969906abe50fb8d632f42426bee4b03a7b98d03c6217c27607f862006c016a0 namespace=k8s.io
Jul 12 00:22:23.097028 env[1315]: time="2025-07-12T00:22:23.097012636Z" level=info msg="cleaning up dead shim"
Jul 12 00:22:23.104627 env[1315]: time="2025-07-12T00:22:23.104527397Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4207 runtime=io.containerd.runc.v2\n"
Jul 12 00:22:23.797162 kubelet[2087]: I0712 00:22:23.797116 2087 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e155e9a6-ea4c-4ad8-9620-b06b5855ff79" path="/var/lib/kubelet/pods/e155e9a6-ea4c-4ad8-9620-b06b5855ff79/volumes"
Jul 12 00:22:24.011541 kubelet[2087]: E0712 00:22:24.011501 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:24.015203 env[1315]: time="2025-07-12T00:22:24.015154531Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:22:24.026087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414207713.mount: Deactivated successfully.
Jul 12 00:22:24.032229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211605045.mount: Deactivated successfully.
Jul 12 00:22:24.035445 env[1315]: time="2025-07-12T00:22:24.035394334Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"92bd415f846773250765e9bae357b3b5d4b072370398da9f313f94eab734bdec\""
Jul 12 00:22:24.036205 env[1315]: time="2025-07-12T00:22:24.036167014Z" level=info msg="StartContainer for \"92bd415f846773250765e9bae357b3b5d4b072370398da9f313f94eab734bdec\""
Jul 12 00:22:24.086041 env[1315]: time="2025-07-12T00:22:24.085914819Z" level=info msg="StartContainer for \"92bd415f846773250765e9bae357b3b5d4b072370398da9f313f94eab734bdec\" returns successfully"
Jul 12 00:22:24.107297 env[1315]: time="2025-07-12T00:22:24.107206701Z" level=info msg="shim disconnected" id=92bd415f846773250765e9bae357b3b5d4b072370398da9f313f94eab734bdec
Jul 12 00:22:24.107297 env[1315]: time="2025-07-12T00:22:24.107288421Z" level=warning msg="cleaning up after shim disconnected" id=92bd415f846773250765e9bae357b3b5d4b072370398da9f313f94eab734bdec namespace=k8s.io
Jul 12 00:22:24.107297 env[1315]: time="2025-07-12T00:22:24.107298381Z" level=info msg="cleaning up dead shim"
Jul 12 00:22:24.116816 env[1315]: time="2025-07-12T00:22:24.116774102Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4267 runtime=io.containerd.runc.v2\n"
Jul 12 00:22:25.013560 kubelet[2087]: E0712 00:22:25.013515 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:25.015703 env[1315]: time="2025-07-12T00:22:25.015653673Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:22:25.026154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810445658.mount: Deactivated successfully.
Jul 12 00:22:25.029430 env[1315]: time="2025-07-12T00:22:25.029377754Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a72f4c9810c18faeae2defab5e3b7a74df3a3d99b7f721096d0449bcbb1f72b1\""
Jul 12 00:22:25.030049 env[1315]: time="2025-07-12T00:22:25.029999554Z" level=info msg="StartContainer for \"a72f4c9810c18faeae2defab5e3b7a74df3a3d99b7f721096d0449bcbb1f72b1\""
Jul 12 00:22:25.084798 env[1315]: time="2025-07-12T00:22:25.084742480Z" level=info msg="StartContainer for \"a72f4c9810c18faeae2defab5e3b7a74df3a3d99b7f721096d0449bcbb1f72b1\" returns successfully"
Jul 12 00:22:25.101234 env[1315]: time="2025-07-12T00:22:25.101168882Z" level=info msg="shim disconnected" id=a72f4c9810c18faeae2defab5e3b7a74df3a3d99b7f721096d0449bcbb1f72b1
Jul 12 00:22:25.101234 env[1315]: time="2025-07-12T00:22:25.101227402Z" level=warning msg="cleaning up after shim disconnected" id=a72f4c9810c18faeae2defab5e3b7a74df3a3d99b7f721096d0449bcbb1f72b1 namespace=k8s.io
Jul 12 00:22:25.101234 env[1315]: time="2025-07-12T00:22:25.101237842Z" level=info msg="cleaning up dead shim"
Jul 12 00:22:25.107497 env[1315]: time="2025-07-12T00:22:25.107446842Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:22:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4322 runtime=io.containerd.runc.v2\n"
Jul 12 00:22:25.208577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a72f4c9810c18faeae2defab5e3b7a74df3a3d99b7f721096d0449bcbb1f72b1-rootfs.mount: Deactivated successfully.
Jul 12 00:22:26.017682 kubelet[2087]: E0712 00:22:26.017646 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:26.021281 env[1315]: time="2025-07-12T00:22:26.020531653Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:22:26.035461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458674837.mount: Deactivated successfully.
Jul 12 00:22:26.041918 env[1315]: time="2025-07-12T00:22:26.041873095Z" level=info msg="CreateContainer within sandbox \"01b0fe77917ebb2edb039b8a01dca7c81a4be13182faf6a6dd1877d7ad9b3b20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca\""
Jul 12 00:22:26.042418 env[1315]: time="2025-07-12T00:22:26.042390975Z" level=info msg="StartContainer for \"3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca\""
Jul 12 00:22:26.091842 env[1315]: time="2025-07-12T00:22:26.091798540Z" level=info msg="StartContainer for \"3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca\" returns successfully"
Jul 12 00:22:26.344972 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 12 00:22:27.022671 kubelet[2087]: E0712 00:22:27.022635 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:28.349342 kubelet[2087]: E0712 00:22:28.349295 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:28.473237 systemd[1]: run-containerd-runc-k8s.io-3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca-runc.cNCvCn.mount: Deactivated successfully.
Jul 12 00:22:28.795216 kubelet[2087]: E0712 00:22:28.795181 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:29.204931 systemd-networkd[1101]: lxc_health: Link UP
Jul 12 00:22:29.215680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 12 00:22:29.219121 systemd-networkd[1101]: lxc_health: Gained carrier
Jul 12 00:22:30.349743 kubelet[2087]: E0712 00:22:30.349692 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:30.365836 kubelet[2087]: I0712 00:22:30.365754 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8bnml" podStartSLOduration=8.365736065 podStartE2EDuration="8.365736065s" podCreationTimestamp="2025-07-12 00:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:22:27.037612112 +0000 UTC m=+89.342323649" watchObservedRunningTime="2025-07-12 00:22:30.365736065 +0000 UTC m=+92.670447562"
Jul 12 00:22:30.589071 systemd-networkd[1101]: lxc_health: Gained IPv6LL
Jul 12 00:22:30.619080 systemd[1]: run-containerd-runc-k8s.io-3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca-runc.rPyTzA.mount: Deactivated successfully.
Jul 12 00:22:31.030936 kubelet[2087]: E0712 00:22:31.030881 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:22:32.032532 kubelet[2087]: E0712 00:22:32.032491 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:22:32.752906 systemd[1]: run-containerd-runc-k8s.io-3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca-runc.eqPo1k.mount: Deactivated successfully. Jul 12 00:22:32.794725 kubelet[2087]: E0712 00:22:32.794681 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:22:34.894425 systemd[1]: run-containerd-runc-k8s.io-3a5ac317068a5ff16890abffc50e7751ae860c33daa4df13edcd22da69713cca-runc.IDPy0a.mount: Deactivated successfully. Jul 12 00:22:34.960976 sshd[3897]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:34.964304 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:53636.service: Deactivated successfully. Jul 12 00:22:34.965345 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:22:34.965358 systemd-logind[1305]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:22:34.966641 systemd-logind[1305]: Removed session 25.