Nov 1 00:30:21.692312 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 1 00:30:21.692331 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Oct 31 23:12:38 -00 2025
Nov 1 00:30:21.692339 kernel: efi: EFI v2.70 by EDK II
Nov 1 00:30:21.692344 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Nov 1 00:30:21.692349 kernel: random: crng init done
Nov 1 00:30:21.692354 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:30:21.692361 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Nov 1 00:30:21.692367 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 00:30:21.692373 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692378 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692384 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692389 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692394 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692399 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692407 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692412 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692418 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:30:21.692424 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 1 00:30:21.692430 kernel: NUMA: Failed to initialise from firmware
Nov 1 00:30:21.692435 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 1 00:30:21.692441 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Nov 1 00:30:21.692447 kernel: Zone ranges:
Nov 1 00:30:21.692453 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 1 00:30:21.692460 kernel: DMA32 empty
Nov 1 00:30:21.692465 kernel: Normal empty
Nov 1 00:30:21.692471 kernel: Movable zone start for each node
Nov 1 00:30:21.692476 kernel: Early memory node ranges
Nov 1 00:30:21.692482 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Nov 1 00:30:21.692488 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Nov 1 00:30:21.692494 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Nov 1 00:30:21.692499 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Nov 1 00:30:21.692505 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Nov 1 00:30:21.692511 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Nov 1 00:30:21.692516 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Nov 1 00:30:21.692522 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 1 00:30:21.692529 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 1 00:30:21.692535 kernel: psci: probing for conduit method from ACPI.
Nov 1 00:30:21.692555 kernel: psci: PSCIv1.1 detected in firmware.
Nov 1 00:30:21.692561 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 1 00:30:21.692567 kernel: psci: Trusted OS migration not required
Nov 1 00:30:21.692575 kernel: psci: SMC Calling Convention v1.1
Nov 1 00:30:21.692582 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 1 00:30:21.692589 kernel: ACPI: SRAT not present
Nov 1 00:30:21.692602 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Nov 1 00:30:21.692631 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Nov 1 00:30:21.692638 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 1 00:30:21.692643 kernel: Detected PIPT I-cache on CPU0
Nov 1 00:30:21.692650 kernel: CPU features: detected: GIC system register CPU interface
Nov 1 00:30:21.692655 kernel: CPU features: detected: Hardware dirty bit management
Nov 1 00:30:21.692661 kernel: CPU features: detected: Spectre-v4
Nov 1 00:30:21.692668 kernel: CPU features: detected: Spectre-BHB
Nov 1 00:30:21.692675 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 1 00:30:21.692681 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 1 00:30:21.692687 kernel: CPU features: detected: ARM erratum 1418040
Nov 1 00:30:21.692693 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 1 00:30:21.692699 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Nov 1 00:30:21.692704 kernel: Policy zone: DMA
Nov 1 00:30:21.692712 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29
Nov 1 00:30:21.692718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:30:21.692724 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:30:21.692730 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:30:21.692736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:30:21.692743 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Nov 1 00:30:21.692750 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:30:21.692755 kernel: trace event string verifier disabled
Nov 1 00:30:21.692761 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:30:21.692768 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:30:21.692774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:30:21.692780 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:30:21.692786 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:30:21.692792 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:30:21.692798 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:30:21.692804 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 1 00:30:21.692811 kernel: GICv3: 256 SPIs implemented
Nov 1 00:30:21.692817 kernel: GICv3: 0 Extended SPIs implemented
Nov 1 00:30:21.692823 kernel: GICv3: Distributor has no Range Selector support
Nov 1 00:30:21.692829 kernel: Root IRQ handler: gic_handle_irq
Nov 1 00:30:21.692835 kernel: GICv3: 16 PPIs implemented
Nov 1 00:30:21.692841 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 1 00:30:21.692847 kernel: ACPI: SRAT not present
Nov 1 00:30:21.692853 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 1 00:30:21.692859 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Nov 1 00:30:21.692865 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Nov 1 00:30:21.692871 kernel: GICv3: using LPI property table @0x00000000400d0000
Nov 1 00:30:21.692877 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Nov 1 00:30:21.692884 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 1 00:30:21.692890 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 1 00:30:21.692897 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 1 00:30:21.692903 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 1 00:30:21.692909 kernel: arm-pv: using stolen time PV
Nov 1 00:30:21.692915 kernel: Console: colour dummy device 80x25
Nov 1 00:30:21.692922 kernel: ACPI: Core revision 20210730
Nov 1 00:30:21.692928 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 1 00:30:21.692934 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:30:21.692941 kernel: LSM: Security Framework initializing
Nov 1 00:30:21.692948 kernel: SELinux: Initializing.
Nov 1 00:30:21.692954 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:30:21.692960 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:30:21.692966 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:30:21.692972 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 1 00:30:21.692978 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 1 00:30:21.692985 kernel: Remapping and enabling EFI services.
Nov 1 00:30:21.692991 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:30:21.692997 kernel: Detected PIPT I-cache on CPU1
Nov 1 00:30:21.693004 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 1 00:30:21.693010 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Nov 1 00:30:21.693017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 1 00:30:21.693023 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 1 00:30:21.693029 kernel: Detected PIPT I-cache on CPU2
Nov 1 00:30:21.693036 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 1 00:30:21.693042 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Nov 1 00:30:21.693048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 1 00:30:21.693054 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 1 00:30:21.693060 kernel: Detected PIPT I-cache on CPU3
Nov 1 00:30:21.693068 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 1 00:30:21.693074 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Nov 1 00:30:21.693080 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 1 00:30:21.693086 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 1 00:30:21.693096 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:30:21.693105 kernel: SMP: Total of 4 processors activated.
Nov 1 00:30:21.693111 kernel: CPU features: detected: 32-bit EL0 Support
Nov 1 00:30:21.693118 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 1 00:30:21.693125 kernel: CPU features: detected: Common not Private translations
Nov 1 00:30:21.693131 kernel: CPU features: detected: CRC32 instructions
Nov 1 00:30:21.693137 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 1 00:30:21.693144 kernel: CPU features: detected: LSE atomic instructions
Nov 1 00:30:21.693152 kernel: CPU features: detected: Privileged Access Never
Nov 1 00:30:21.693158 kernel: CPU features: detected: RAS Extension Support
Nov 1 00:30:21.693165 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 1 00:30:21.693171 kernel: CPU: All CPU(s) started at EL1
Nov 1 00:30:21.693178 kernel: alternatives: patching kernel code
Nov 1 00:30:21.693185 kernel: devtmpfs: initialized
Nov 1 00:30:21.693192 kernel: KASLR enabled
Nov 1 00:30:21.693199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:30:21.693205 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:30:21.693212 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:30:21.693219 kernel: SMBIOS 3.0.0 present.
Nov 1 00:30:21.693226 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Nov 1 00:30:21.693232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:30:21.693239 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 1 00:30:21.693247 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 1 00:30:21.693253 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 1 00:30:21.693260 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:30:21.693266 kernel: audit: type=2000 audit(0.039:1): state=initialized audit_enabled=0 res=1
Nov 1 00:30:21.693273 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:30:21.693279 kernel: cpuidle: using governor menu
Nov 1 00:30:21.693286 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 1 00:30:21.693292 kernel: ASID allocator initialised with 32768 entries
Nov 1 00:30:21.693299 kernel: ACPI: bus type PCI registered
Nov 1 00:30:21.693306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:30:21.693313 kernel: Serial: AMBA PL011 UART driver
Nov 1 00:30:21.693320 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:30:21.693326 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Nov 1 00:30:21.693333 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:30:21.693340 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Nov 1 00:30:21.693346 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:30:21.693353 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 1 00:30:21.693359 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:30:21.693367 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:30:21.693373 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:30:21.693380 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:30:21.693386 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:30:21.693393 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:30:21.693400 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:30:21.693406 kernel: ACPI: Interpreter enabled
Nov 1 00:30:21.693413 kernel: ACPI: Using GIC for interrupt routing
Nov 1 00:30:21.693419 kernel: ACPI: MCFG table detected, 1 entries
Nov 1 00:30:21.693427 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 1 00:30:21.693433 kernel: printk: console [ttyAMA0] enabled
Nov 1 00:30:21.693440 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:30:21.693579 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:30:21.693645 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 1 00:30:21.693705 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 1 00:30:21.693764 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 1 00:30:21.693824 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 1 00:30:21.693832 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 1 00:30:21.693839 kernel: PCI host bridge to bus 0000:00
Nov 1 00:30:21.693903 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 1 00:30:21.693958 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 1 00:30:21.694010 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 1 00:30:21.694061 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:30:21.694132 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 1 00:30:21.694238 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:30:21.694307 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Nov 1 00:30:21.694396 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 1 00:30:21.694523 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 1 00:30:21.694625 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 1 00:30:21.694689 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 1 00:30:21.694751 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Nov 1 00:30:21.694806 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 1 00:30:21.694873 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 1 00:30:21.694928 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 1 00:30:21.694936 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 1 00:30:21.694943 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 1 00:30:21.694950 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 1 00:30:21.694958 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 1 00:30:21.694964 kernel: iommu: Default domain type: Translated
Nov 1 00:30:21.694971 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 1 00:30:21.694977 kernel: vgaarb: loaded
Nov 1 00:30:21.694984 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:30:21.694990 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:30:21.694997 kernel: PTP clock support registered
Nov 1 00:30:21.695003 kernel: Registered efivars operations
Nov 1 00:30:21.695009 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 1 00:30:21.695016 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:30:21.695023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:30:21.695030 kernel: pnp: PnP ACPI init
Nov 1 00:30:21.695098 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 1 00:30:21.695107 kernel: pnp: PnP ACPI: found 1 devices
Nov 1 00:30:21.695114 kernel: NET: Registered PF_INET protocol family
Nov 1 00:30:21.695120 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:30:21.695127 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:30:21.695134 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:30:21.695142 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:30:21.695148 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Nov 1 00:30:21.695155 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:30:21.695161 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:30:21.695168 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:30:21.695174 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:30:21.695181 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:30:21.695187 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 1 00:30:21.695194 kernel: kvm [1]: HYP mode not available
Nov 1 00:30:21.695201 kernel: Initialise system trusted keyrings
Nov 1 00:30:21.695208 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:30:21.695214 kernel: Key type asymmetric registered
Nov 1 00:30:21.695222 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:30:21.695228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:30:21.695235 kernel: io scheduler mq-deadline registered
Nov 1 00:30:21.695241 kernel: io scheduler kyber registered
Nov 1 00:30:21.695248 kernel: io scheduler bfq registered
Nov 1 00:30:21.695255 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 1 00:30:21.695262 kernel: ACPI: button: Power Button [PWRB]
Nov 1 00:30:21.695269 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 1 00:30:21.695329 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 1 00:30:21.695338 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:30:21.695344 kernel: thunder_xcv, ver 1.0
Nov 1 00:30:21.695351 kernel: thunder_bgx, ver 1.0
Nov 1 00:30:21.695357 kernel: nicpf, ver 1.0
Nov 1 00:30:21.695363 kernel: nicvf, ver 1.0
Nov 1 00:30:21.695428 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 1 00:30:21.695484 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-01T00:30:21 UTC (1761957021)
Nov 1 00:30:21.695493 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 1 00:30:21.695499 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:30:21.695506 kernel: Segment Routing with IPv6
Nov 1 00:30:21.695512 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:30:21.695519 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:30:21.695525 kernel: Key type dns_resolver registered
Nov 1 00:30:21.695531 kernel: registered taskstats version 1
Nov 1 00:30:21.695560 kernel: Loading compiled-in X.509 certificates
Nov 1 00:30:21.695568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 4aa5071b9a6f96878595e36d4bd5862a671c915d'
Nov 1 00:30:21.695574 kernel: Key type .fscrypt registered
Nov 1 00:30:21.695581 kernel: Key type fscrypt-provisioning registered
Nov 1 00:30:21.695588 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:30:21.695594 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:30:21.695601 kernel: ima: No architecture policies found
Nov 1 00:30:21.695607 kernel: clk: Disabling unused clocks
Nov 1 00:30:21.695614 kernel: Freeing unused kernel memory: 36416K
Nov 1 00:30:21.695622 kernel: Run /init as init process
Nov 1 00:30:21.695629 kernel: with arguments:
Nov 1 00:30:21.695636 kernel: /init
Nov 1 00:30:21.695642 kernel: with environment:
Nov 1 00:30:21.695648 kernel: HOME=/
Nov 1 00:30:21.695655 kernel: TERM=linux
Nov 1 00:30:21.695661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:30:21.695669 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:30:21.695679 systemd[1]: Detected virtualization kvm.
Nov 1 00:30:21.695687 systemd[1]: Detected architecture arm64.
Nov 1 00:30:21.695694 systemd[1]: Running in initrd.
Nov 1 00:30:21.695700 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:30:21.695707 systemd[1]: Hostname set to .
Nov 1 00:30:21.695715 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:30:21.695722 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:30:21.695729 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:30:21.695736 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:30:21.695743 systemd[1]: Reached target paths.target.
Nov 1 00:30:21.695750 systemd[1]: Reached target slices.target.
Nov 1 00:30:21.695757 systemd[1]: Reached target swap.target.
Nov 1 00:30:21.695764 systemd[1]: Reached target timers.target.
Nov 1 00:30:21.695771 systemd[1]: Listening on iscsid.socket.
Nov 1 00:30:21.695778 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:30:21.695786 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:30:21.695793 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:30:21.695800 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:30:21.695807 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:30:21.695814 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:30:21.695821 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:30:21.695828 systemd[1]: Reached target sockets.target.
Nov 1 00:30:21.695835 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:30:21.695842 systemd[1]: Finished network-cleanup.service.
Nov 1 00:30:21.695850 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:30:21.695857 systemd[1]: Starting systemd-journald.service...
Nov 1 00:30:21.695864 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:30:21.695871 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:30:21.695878 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:30:21.695885 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:30:21.695892 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:30:21.695899 kernel: audit: type=1130 audit(1761957021.691:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.695906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:30:21.695917 systemd-journald[290]: Journal started
Nov 1 00:30:21.695955 systemd-journald[290]: Runtime Journal (/run/log/journal/188eef9dcb674827b01b232863678afb) is 6.0M, max 48.7M, 42.6M free.
Nov 1 00:30:21.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.694588 systemd-modules-load[291]: Inserted module 'overlay'
Nov 1 00:30:21.697661 systemd[1]: Started systemd-journald.service.
Nov 1 00:30:21.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.700441 kernel: audit: type=1130 audit(1761957021.697:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.704579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:30:21.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.708417 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:30:21.712049 kernel: audit: type=1130 audit(1761957021.704:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.712070 kernel: audit: type=1130 audit(1761957021.709:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.712690 systemd-resolved[292]: Positive Trust Anchors:
Nov 1 00:30:21.712703 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:30:21.712731 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:30:21.722609 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:30:21.722631 kernel: Bridge firewalling registered
Nov 1 00:30:21.712808 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:30:21.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.717000 systemd-resolved[292]: Defaulting to hostname 'linux'.
Nov 1 00:30:21.727617 kernel: audit: type=1130 audit(1761957021.722:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.717922 systemd[1]: Started systemd-resolved.service.
Nov 1 00:30:21.722561 systemd-modules-load[291]: Inserted module 'br_netfilter'
Nov 1 00:30:21.723326 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:30:21.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.728421 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:30:21.733705 kernel: audit: type=1130 audit(1761957021.729:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.732378 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:30:21.736559 kernel: SCSI subsystem initialized
Nov 1 00:30:21.741070 dracut-cmdline[309]: dracut-dracut-053
Nov 1 00:30:21.743234 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29
Nov 1 00:30:21.749188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:30:21.749207 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:30:21.749216 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:30:21.749358 systemd-modules-load[291]: Inserted module 'dm_multipath'
Nov 1 00:30:21.750109 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:30:21.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.751557 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:30:21.754991 kernel: audit: type=1130 audit(1761957021.750:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.759790 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:30:21.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.763565 kernel: audit: type=1130 audit(1761957021.759:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.806570 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:30:21.817571 kernel: iscsi: registered transport (tcp)
Nov 1 00:30:21.832567 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:30:21.832597 kernel: QLogic iSCSI HBA Driver
Nov 1 00:30:21.865525 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:30:21.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.867030 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:30:21.870119 kernel: audit: type=1130 audit(1761957021.865:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:21.909565 kernel: raid6: neonx8 gen() 13700 MB/s
Nov 1 00:30:21.925558 kernel: raid6: neonx8 xor() 10773 MB/s
Nov 1 00:30:21.942564 kernel: raid6: neonx4 gen() 13473 MB/s
Nov 1 00:30:21.959563 kernel: raid6: neonx4 xor() 11159 MB/s
Nov 1 00:30:21.976575 kernel: raid6: neonx2 gen() 12949 MB/s
Nov 1 00:30:21.993565 kernel: raid6: neonx2 xor() 10358 MB/s
Nov 1 00:30:22.010563 kernel: raid6: neonx1 gen() 10521 MB/s
Nov 1 00:30:22.027561 kernel: raid6: neonx1 xor() 8762 MB/s
Nov 1 00:30:22.044570 kernel: raid6: int64x8 gen() 6265 MB/s
Nov 1 00:30:22.061569 kernel: raid6: int64x8 xor() 3542 MB/s
Nov 1 00:30:22.078561 kernel: raid6: int64x4 gen() 7218 MB/s
Nov 1 00:30:22.095561 kernel: raid6: int64x4 xor() 3853 MB/s
Nov 1 00:30:22.112563 kernel: raid6: int64x2 gen() 6147 MB/s
Nov 1 00:30:22.129563 kernel: raid6: int64x2 xor() 3307 MB/s
Nov 1 00:30:22.146571 kernel: raid6: int64x1 gen() 5037 MB/s
Nov 1 00:30:22.164071 kernel: raid6: int64x1 xor() 2644 MB/s
Nov 1 00:30:22.164083 kernel: raid6: using algorithm neonx8 gen() 13700 MB/s
Nov 1 00:30:22.164092 kernel: raid6: .... xor() 10773 MB/s, rmw enabled
Nov 1 00:30:22.164104 kernel: raid6: using neon recovery algorithm
Nov 1 00:30:22.174901 kernel: xor: measuring software checksum speed
Nov 1 00:30:22.174919 kernel: 8regs : 17238 MB/sec
Nov 1 00:30:22.175561 kernel: 32regs : 20717 MB/sec
Nov 1 00:30:22.176591 kernel: arm64_neon : 24968 MB/sec
Nov 1 00:30:22.176604 kernel: xor: using function: arm64_neon (24968 MB/sec)
Nov 1 00:30:22.228568 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Nov 1 00:30:22.238348 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:30:22.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:22.239000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:30:22.239000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:30:22.240087 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:30:22.252049 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Nov 1 00:30:22.255350 systemd[1]: Started systemd-udevd.service.
Nov 1 00:30:22.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:22.256835 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:30:22.267560 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Nov 1 00:30:22.292763 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:30:22.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:22.294135 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:30:22.326337 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:30:22.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:30:22.354629 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:30:22.359899 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:30:22.359914 kernel: GPT:9289727 != 19775487
Nov 1 00:30:22.359922 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:30:22.359931 kernel: GPT:9289727 != 19775487
Nov 1 00:30:22.359938 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:30:22.359946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:30:22.374564 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (538) Nov 1 00:30:22.376420 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:30:22.377432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:30:22.385635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:30:22.388821 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:30:22.391995 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:30:22.395466 systemd[1]: Starting disk-uuid.service... Nov 1 00:30:22.401215 disk-uuid[561]: Primary Header is updated. Nov 1 00:30:22.401215 disk-uuid[561]: Secondary Entries is updated. Nov 1 00:30:22.401215 disk-uuid[561]: Secondary Header is updated. Nov 1 00:30:22.404115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:30:23.410586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:30:23.410637 disk-uuid[562]: The operation has completed successfully. Nov 1 00:30:23.434003 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:30:23.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.434093 systemd[1]: Finished disk-uuid.service. Nov 1 00:30:23.435536 systemd[1]: Starting verity-setup.service... Nov 1 00:30:23.447826 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 1 00:30:23.466152 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:30:23.468356 systemd[1]: Mounting sysusr-usr.mount... 
Nov 1 00:30:23.470004 systemd[1]: Finished verity-setup.service. Nov 1 00:30:23.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.516266 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:30:23.517447 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:30:23.517040 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:30:23.517703 systemd[1]: Starting ignition-setup.service... Nov 1 00:30:23.520068 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:30:23.528130 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:30:23.528168 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:30:23.528179 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:30:23.535945 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:30:23.540806 systemd[1]: Finished ignition-setup.service. Nov 1 00:30:23.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.542160 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:30:23.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.594000 audit: BPF prog-id=9 op=LOAD Nov 1 00:30:23.593817 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:30:23.595739 systemd[1]: Starting systemd-networkd.service... 
Nov 1 00:30:23.601984 ignition[650]: Ignition 2.14.0 Nov 1 00:30:23.601994 ignition[650]: Stage: fetch-offline Nov 1 00:30:23.602029 ignition[650]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:23.602038 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:30:23.602157 ignition[650]: parsed url from cmdline: "" Nov 1 00:30:23.602160 ignition[650]: no config URL provided Nov 1 00:30:23.602165 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:30:23.602171 ignition[650]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:30:23.602189 ignition[650]: op(1): [started] loading QEMU firmware config module Nov 1 00:30:23.602193 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:30:23.609830 ignition[650]: op(1): [finished] loading QEMU firmware config module Nov 1 00:30:23.609856 ignition[650]: QEMU firmware config was not found. Ignoring... Nov 1 00:30:23.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.620764 systemd-networkd[739]: lo: Link UP Nov 1 00:30:23.620775 systemd-networkd[739]: lo: Gained carrier Nov 1 00:30:23.621147 systemd-networkd[739]: Enumeration completed Nov 1 00:30:23.621323 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:30:23.622353 systemd-networkd[739]: eth0: Link UP Nov 1 00:30:23.622356 systemd-networkd[739]: eth0: Gained carrier Nov 1 00:30:23.622663 systemd[1]: Started systemd-networkd.service. Nov 1 00:30:23.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.623631 systemd[1]: Reached target network.target. Nov 1 00:30:23.625144 systemd[1]: Starting iscsiuio.service... 
Nov 1 00:30:23.632296 systemd[1]: Started iscsiuio.service. Nov 1 00:30:23.633874 systemd[1]: Starting iscsid.service... Nov 1 00:30:23.637285 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:30:23.637285 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 00:30:23.637285 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:30:23.637285 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:30:23.637285 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:30:23.637285 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:30:23.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.639893 systemd[1]: Started iscsid.service. Nov 1 00:30:23.645659 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:30:23.652621 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:30:23.655658 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:30:23.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.656739 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:30:23.658194 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:30:23.659802 systemd[1]: Reached target remote-fs.target. Nov 1 00:30:23.662038 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:30:23.669173 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:30:23.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.681129 ignition[650]: parsing config with SHA512: a16410420319655f50743a11780962610288dc9a6b33d7be1549616f4fc926cea57b77f2ac80e2bd3cc7c0118313db06c7305696d0ee5bb2d5d1668d65b91807 Nov 1 00:30:23.687881 unknown[650]: fetched base config from "system" Nov 1 00:30:23.687892 unknown[650]: fetched user config from "qemu" Nov 1 00:30:23.688329 ignition[650]: fetch-offline: fetch-offline passed Nov 1 00:30:23.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.689333 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:30:23.688377 ignition[650]: Ignition finished successfully Nov 1 00:30:23.690725 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:30:23.691402 systemd[1]: Starting ignition-kargs.service... Nov 1 00:30:23.699570 ignition[760]: Ignition 2.14.0 Nov 1 00:30:23.699580 ignition[760]: Stage: kargs Nov 1 00:30:23.699669 ignition[760]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:23.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.701556 systemd[1]: Finished ignition-kargs.service. 
Nov 1 00:30:23.699678 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:30:23.702994 systemd[1]: Starting ignition-disks.service... Nov 1 00:30:23.700542 ignition[760]: kargs: kargs passed Nov 1 00:30:23.700605 ignition[760]: Ignition finished successfully Nov 1 00:30:23.708858 ignition[766]: Ignition 2.14.0 Nov 1 00:30:23.708866 ignition[766]: Stage: disks Nov 1 00:30:23.708947 ignition[766]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:23.708957 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:30:23.709962 ignition[766]: disks: disks passed Nov 1 00:30:23.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.711620 systemd[1]: Finished ignition-disks.service. Nov 1 00:30:23.710004 ignition[766]: Ignition finished successfully Nov 1 00:30:23.713217 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:30:23.714297 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:30:23.715459 systemd[1]: Reached target local-fs.target. Nov 1 00:30:23.716635 systemd[1]: Reached target sysinit.target. Nov 1 00:30:23.717859 systemd[1]: Reached target basic.target. Nov 1 00:30:23.719790 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:30:23.730661 systemd-fsck[774]: ROOT: clean, 637/553520 files, 56031/553472 blocks Nov 1 00:30:23.734086 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:30:23.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.736454 systemd[1]: Mounting sysroot.mount... Nov 1 00:30:23.741572 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:30:23.741760 systemd[1]: Mounted sysroot.mount. 
Nov 1 00:30:23.742400 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:30:23.744366 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:30:23.745242 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:30:23.745278 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:30:23.745299 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:30:23.746970 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:30:23.748656 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:30:23.752714 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:30:23.756852 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:30:23.760402 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:30:23.764082 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:30:23.793943 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:30:23.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.795349 systemd[1]: Starting ignition-mount.service... Nov 1 00:30:23.796622 systemd[1]: Starting sysroot-boot.service... Nov 1 00:30:23.800507 bash[825]: umount: /sysroot/usr/share/oem: not mounted. 
Nov 1 00:30:23.809341 ignition[827]: INFO : Ignition 2.14.0 Nov 1 00:30:23.809341 ignition[827]: INFO : Stage: mount Nov 1 00:30:23.810822 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:23.810822 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:30:23.810822 ignition[827]: INFO : mount: mount passed Nov 1 00:30:23.810822 ignition[827]: INFO : Ignition finished successfully Nov 1 00:30:23.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.811147 systemd[1]: Finished ignition-mount.service. Nov 1 00:30:23.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:23.815092 systemd[1]: Finished sysroot-boot.service. Nov 1 00:30:24.479336 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:30:24.490377 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (835) Nov 1 00:30:24.490411 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:30:24.490421 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:30:24.490980 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:30:24.494999 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:30:24.496435 systemd[1]: Starting ignition-files.service... 
Nov 1 00:30:24.509859 ignition[855]: INFO : Ignition 2.14.0 Nov 1 00:30:24.509859 ignition[855]: INFO : Stage: files Nov 1 00:30:24.511401 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:24.511401 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:30:24.511401 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:30:24.514567 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:30:24.514567 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:30:24.514567 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:30:24.514567 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:30:24.514567 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:30:24.514097 unknown[855]: wrote ssh authorized keys file for user: core Nov 1 00:30:24.521177 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 1 00:30:24.521177 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 1 00:30:24.557594 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:30:24.695689 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 1 00:30:24.697451 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:30:24.697451 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 1 00:30:24.895000 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:30:25.048494 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:30:25.050119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 1 00:30:25.333646 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:30:25.653417 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:30:25.653417 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 
00:30:25.657411 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:30:25.657411 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:30:25.676938 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:30:25.678343 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:30:25.678343 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:30:25.678343 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:30:25.678343 ignition[855]: INFO : files: files passed Nov 1 00:30:25.678343 ignition[855]: INFO : Ignition finished successfully Nov 1 00:30:25.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.678265 systemd[1]: Finished ignition-files.service. Nov 1 00:30:25.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:30:25.679988 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:30:25.681164 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:30:25.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.691078 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Nov 1 00:30:25.681817 systemd[1]: Starting ignition-quench.service... Nov 1 00:30:25.694005 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:30:25.685579 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:30:25.685659 systemd[1]: Finished ignition-quench.service. Nov 1 00:30:25.686623 systemd-networkd[739]: eth0: Gained IPv6LL Nov 1 00:30:25.688491 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:30:25.689498 systemd[1]: Reached target ignition-complete.target. Nov 1 00:30:25.692231 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:30:25.703726 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:30:25.703806 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:30:25.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.705288 systemd[1]: Reached target initrd-fs.target. 
Nov 1 00:30:25.706320 systemd[1]: Reached target initrd.target. Nov 1 00:30:25.707469 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:30:25.708138 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:30:25.717882 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:30:25.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.719204 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:30:25.726408 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:30:25.727229 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:30:25.728477 systemd[1]: Stopped target timers.target. Nov 1 00:30:25.729702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:30:25.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.729796 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:30:25.730928 systemd[1]: Stopped target initrd.target. Nov 1 00:30:25.732170 systemd[1]: Stopped target basic.target. Nov 1 00:30:25.733257 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:30:25.734432 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:30:25.735608 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:30:25.737061 systemd[1]: Stopped target remote-fs.target. Nov 1 00:30:25.738269 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:30:25.739503 systemd[1]: Stopped target sysinit.target. Nov 1 00:30:25.740632 systemd[1]: Stopped target local-fs.target. Nov 1 00:30:25.741812 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:30:25.742958 systemd[1]: Stopped target swap.target. 
Nov 1 00:30:25.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.744027 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:30:25.744123 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:30:25.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.745326 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:30:25.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.746300 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:30:25.746391 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:30:25.747692 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:30:25.747781 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:30:25.748988 systemd[1]: Stopped target paths.target. Nov 1 00:30:25.750062 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:30:25.754600 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:30:25.756088 systemd[1]: Stopped target slices.target. Nov 1 00:30:25.756792 systemd[1]: Stopped target sockets.target. Nov 1 00:30:25.757897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:30:25.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.758000 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Nov 1 00:30:25.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.759228 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:30:25.762722 iscsid[746]: iscsid shutting down. Nov 1 00:30:25.759316 systemd[1]: Stopped ignition-files.service. Nov 1 00:30:25.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.761306 systemd[1]: Stopping ignition-mount.service... Nov 1 00:30:25.762116 systemd[1]: Stopping iscsid.service... Nov 1 00:30:25.763099 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:30:25.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.769456 ignition[895]: INFO : Ignition 2.14.0 Nov 1 00:30:25.769456 ignition[895]: INFO : Stage: umount Nov 1 00:30:25.769456 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:25.769456 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:30:25.769456 ignition[895]: INFO : umount: umount passed Nov 1 00:30:25.769456 ignition[895]: INFO : Ignition finished successfully Nov 1 00:30:25.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:25.763210 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:30:25.765112 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:30:25.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.767194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:30:25.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.767332 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:30:25.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.768761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:30:25.768850 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:30:25.771539 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:30:25.771636 systemd[1]: Stopped iscsid.service. Nov 1 00:30:25.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.773171 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:30:25.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:25.773244 systemd[1]: Closed iscsid.socket. Nov 1 00:30:25.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.774311 systemd[1]: Stopping iscsiuio.service... Nov 1 00:30:25.776199 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:30:25.776636 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:30:25.776703 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:30:25.777737 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:30:25.777815 systemd[1]: Stopped iscsiuio.service. Nov 1 00:30:25.778966 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:30:25.779047 systemd[1]: Stopped ignition-mount.service. Nov 1 00:30:25.781610 systemd[1]: Stopped target network.target. Nov 1 00:30:25.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.782356 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:30:25.782387 systemd[1]: Closed iscsiuio.socket. Nov 1 00:30:25.783503 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:30:25.783564 systemd[1]: Stopped ignition-disks.service. Nov 1 00:30:25.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.784797 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:30:25.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:25.784836 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:30:25.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.785843 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:30:25.785878 systemd[1]: Stopped ignition-setup.service. Nov 1 00:30:25.787153 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:30:25.788264 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:30:25.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.792580 systemd-networkd[739]: eth0: DHCPv6 lease lost Nov 1 00:30:25.794119 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:30:25.812000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:30:25.794212 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:30:25.796971 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:30:25.797000 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:30:25.815000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:30:25.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.798806 systemd[1]: Stopping network-cleanup.service... Nov 1 00:30:25.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.799400 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:30:25.799448 systemd[1]: Stopped parse-ip-for-networkd.service. 
Nov 1 00:30:25.801301 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:30:25.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.801345 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:30:25.802948 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:30:25.802986 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:30:25.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.803913 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:30:25.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.808856 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:30:25.809290 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:30:25.809383 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:30:25.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.813821 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:30:25.813925 systemd[1]: Stopped network-cleanup.service. Nov 1 00:30:25.816247 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:30:25.816365 systemd[1]: Stopped systemd-udevd.service. 
Nov 1 00:30:25.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.817438 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:30:25.817472 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:30:25.819204 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:30:25.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.819236 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:30:25.820675 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:30:25.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:25.820719 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:30:25.821914 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:30:25.821949 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:30:25.824875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:30:25.824921 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:30:25.826825 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:30:25.828386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:30:25.828436 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:30:25.832522 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Nov 1 00:30:25.832620 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:30:25.835048 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:30:25.835131 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:30:25.836315 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:30:25.837676 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:30:25.837720 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:30:25.839485 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:30:25.845217 systemd[1]: Switching root. Nov 1 00:30:25.864805 systemd-journald[290]: Journal stopped Nov 1 00:30:27.833478 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Nov 1 00:30:27.833558 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:30:27.833587 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 00:30:27.833603 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:30:27.833617 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:30:27.833627 kernel: SELinux: policy capability open_perms=1 Nov 1 00:30:27.833637 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:30:27.833646 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:30:27.833656 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:30:27.833666 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:30:27.833677 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:30:27.833686 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:30:27.833696 systemd[1]: Successfully loaded SELinux policy in 31.837ms. Nov 1 00:30:27.833710 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.823ms. 
Nov 1 00:30:27.833723 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:30:27.833734 systemd[1]: Detected virtualization kvm. Nov 1 00:30:27.833744 systemd[1]: Detected architecture arm64. Nov 1 00:30:27.833755 systemd[1]: Detected first boot. Nov 1 00:30:27.833766 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:30:27.833779 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:30:27.833789 kernel: kauditd_printk_skb: 72 callbacks suppressed Nov 1 00:30:27.833801 kernel: audit: type=1400 audit(1761957026.053:83): avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:30:27.833813 kernel: audit: type=1300 audit(1761957026.053:83): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:30:27.833824 kernel: audit: type=1327 audit(1761957026.053:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:30:27.833834 kernel: audit: type=1400 audit(1761957026.054:84): avc: denied { 
associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:30:27.833845 kernel: audit: type=1300 audit(1761957026.054:84): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:30:27.833855 kernel: audit: type=1307 audit(1761957026.054:84): cwd="/" Nov 1 00:30:27.833865 kernel: audit: type=1302 audit(1761957026.054:84): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:30:27.833875 kernel: audit: type=1302 audit(1761957026.054:84): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:30:27.833887 kernel: audit: type=1327 audit(1761957026.054:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:30:27.833897 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:30:27.833908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:30:27.833918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 1 00:30:27.833929 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:30:27.833939 kernel: audit: type=1334 audit(1761957027.727:85): prog-id=12 op=LOAD Nov 1 00:30:27.833949 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:30:27.833961 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:30:27.833972 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:30:27.833983 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:30:27.833993 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:30:27.834003 systemd[1]: Created slice system-getty.slice. Nov 1 00:30:27.834013 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:30:27.834025 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:30:27.834036 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:30:27.834046 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:30:27.834056 systemd[1]: Created slice user.slice. Nov 1 00:30:27.834067 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:30:27.834077 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:30:27.834087 systemd[1]: Set up automount boot.automount. Nov 1 00:30:27.834097 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:30:27.834108 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:30:27.834123 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:30:27.834133 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:30:27.834143 systemd[1]: Reached target integritysetup.target. Nov 1 00:30:27.834172 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:30:27.834183 systemd[1]: Reached target remote-fs.target. Nov 1 00:30:27.834198 systemd[1]: Reached target slices.target. 
Nov 1 00:30:27.834208 systemd[1]: Reached target swap.target. Nov 1 00:30:27.834219 systemd[1]: Reached target torcx.target. Nov 1 00:30:27.834230 systemd[1]: Reached target veritysetup.target. Nov 1 00:30:27.834241 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:30:27.834252 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:30:27.834263 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:30:27.834370 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:30:27.834381 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:30:27.834391 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:30:27.834403 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:30:27.834413 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:30:27.834423 systemd[1]: Mounting media.mount... Nov 1 00:30:27.834442 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:30:27.834455 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:30:27.834465 systemd[1]: Mounting tmp.mount... Nov 1 00:30:27.834475 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:30:27.834486 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:30:27.834496 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:30:27.834515 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:30:27.834652 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:30:27.834666 systemd[1]: Starting modprobe@drm.service... Nov 1 00:30:27.834677 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:30:27.834690 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:30:27.834701 systemd[1]: Starting modprobe@loop.service... Nov 1 00:30:27.834711 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:30:27.834722 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Nov 1 00:30:27.834732 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:30:27.834743 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:30:27.834752 kernel: loop: module loaded Nov 1 00:30:27.834763 kernel: fuse: init (API version 7.34) Nov 1 00:30:27.834775 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:30:27.834785 systemd[1]: Stopped systemd-journald.service. Nov 1 00:30:27.834795 systemd[1]: Starting systemd-journald.service... Nov 1 00:30:27.834816 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:30:27.834827 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:30:27.834837 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:30:27.834847 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:30:27.834857 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:30:27.834867 systemd[1]: Stopped verity-setup.service. Nov 1 00:30:27.834877 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:30:27.834889 systemd-journald[1008]: Journal started Nov 1 00:30:27.835011 systemd-journald[1008]: Runtime Journal (/run/log/journal/188eef9dcb674827b01b232863678afb) is 6.0M, max 48.7M, 42.6M free. 
Nov 1 00:30:25.918000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:30:26.013000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:30:26.013000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:30:26.013000 audit: BPF prog-id=10 op=LOAD Nov 1 00:30:26.013000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:30:26.014000 audit: BPF prog-id=11 op=LOAD Nov 1 00:30:26.014000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:30:26.053000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:30:26.053000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:30:26.053000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:30:26.054000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:30:26.054000 audit[930]: SYSCALL arch=c00000b7 syscall=34 
success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:30:26.054000 audit: CWD cwd="/" Nov 1 00:30:26.054000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:30:26.054000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:30:26.054000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:30:27.727000 audit: BPF prog-id=12 op=LOAD Nov 1 00:30:27.727000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:30:27.728000 audit: BPF prog-id=13 op=LOAD Nov 1 00:30:27.728000 audit: BPF prog-id=14 op=LOAD Nov 1 00:30:27.728000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:30:27.728000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:30:27.728000 audit: BPF prog-id=15 op=LOAD Nov 1 00:30:27.728000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:30:27.728000 audit: BPF prog-id=16 op=LOAD Nov 1 00:30:27.728000 audit: BPF prog-id=17 op=LOAD Nov 1 00:30:27.728000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:30:27.728000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:30:27.729000 audit: BPF prog-id=18 op=LOAD Nov 1 00:30:27.729000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:30:27.729000 audit: BPF prog-id=19 op=LOAD Nov 1 00:30:27.729000 audit: BPF prog-id=20 op=LOAD Nov 1 00:30:27.729000 audit: BPF prog-id=16 op=UNLOAD Nov 1 
00:30:27.729000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:30:27.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.740000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:30:27.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:27.817000 audit: BPF prog-id=21 op=LOAD Nov 1 00:30:27.817000 audit: BPF prog-id=22 op=LOAD Nov 1 00:30:27.817000 audit: BPF prog-id=23 op=LOAD Nov 1 00:30:27.817000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:30:27.817000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:30:27.831000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:30:27.831000 audit[1008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe0aa4d90 a2=4000 a3=1 items=0 ppid=1 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:30:27.831000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:30:27.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.726606 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:30:26.051934 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:30:27.726619 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 00:30:26.052246 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:30:27.730661 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 1 00:30:26.052265 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:30:26.052294 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:30:26.052303 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:30:26.052330 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:30:26.052342 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:30:26.052539 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:30:26.052588 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:30:27.836976 systemd[1]: Started systemd-journald.service. 
Nov 1 00:30:26.052600 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:30:26.053035 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:30:27.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:26.053082 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:30:26.053101 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:30:26.053115 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:30:26.053131 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:30:26.053144 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:30:27.467291 /usr/lib/systemd/system-generators/torcx-generator[930]: 
time="2025-11-01T00:30:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:30:27.837458 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:30:27.467604 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:30:27.467996 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:30:27.468177 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:30:27.468244 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:30:27.468450 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-11-01T00:30:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:30:27.838337 systemd[1]: Mounted media.mount. 
Nov 1 00:30:27.839039 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:30:27.839789 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:30:27.840561 systemd[1]: Mounted tmp.mount. Nov 1 00:30:27.841470 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:30:27.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.842497 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:30:27.842648 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:30:27.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.843575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:30:27.843725 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:30:27.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.844734 systemd[1]: Finished flatcar-tmpfiles.service. 
Nov 1 00:30:27.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.845647 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:30:27.845804 systemd[1]: Finished modprobe@drm.service. Nov 1 00:30:27.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.846712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:30:27.846845 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:30:27.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.847787 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:30:27.847933 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:30:27.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:27.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.848914 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:30:27.849050 systemd[1]: Finished modprobe@loop.service. Nov 1 00:30:27.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.850000 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:30:27.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.851023 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:30:27.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.852074 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:30:27.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.853420 systemd[1]: Reached target network-pre.target. Nov 1 00:30:27.855409 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Nov 1 00:30:27.857330 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:30:27.858023 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:30:27.859707 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:30:27.861406 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:30:27.862289 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:30:27.863276 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:30:27.864183 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:30:27.865181 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:30:27.867667 systemd-journald[1008]: Time spent on flushing to /var/log/journal/188eef9dcb674827b01b232863678afb is 13.457ms for 1003 entries. Nov 1 00:30:27.867667 systemd-journald[1008]: System Journal (/var/log/journal/188eef9dcb674827b01b232863678afb) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:30:27.895702 systemd-journald[1008]: Received client request to flush runtime journal. Nov 1 00:30:27.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.866956 systemd[1]: Starting systemd-sysusers.service... 
Nov 1 00:30:27.871020 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:30:27.872145 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:30:27.896923 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:30:27.873240 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:30:27.874219 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:30:27.878801 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:30:27.880587 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:30:27.891779 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:30:27.896592 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:30:27.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:27.898473 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:30:27.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.215118 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:30:28.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.215000 audit: BPF prog-id=24 op=LOAD Nov 1 00:30:28.217215 systemd[1]: Starting systemd-udevd.service... Nov 1 00:30:28.216000 audit: BPF prog-id=25 op=LOAD Nov 1 00:30:28.216000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:30:28.216000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:30:28.231668 systemd-udevd[1033]: Using default interface naming scheme 'v252'. 
Nov 1 00:30:28.244769 systemd[1]: Started systemd-udevd.service. Nov 1 00:30:28.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.246000 audit: BPF prog-id=26 op=LOAD Nov 1 00:30:28.247208 systemd[1]: Starting systemd-networkd.service... Nov 1 00:30:28.252000 audit: BPF prog-id=27 op=LOAD Nov 1 00:30:28.252000 audit: BPF prog-id=28 op=LOAD Nov 1 00:30:28.252000 audit: BPF prog-id=29 op=LOAD Nov 1 00:30:28.253357 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:30:28.271428 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Nov 1 00:30:28.279341 systemd[1]: Started systemd-userdbd.service. Nov 1 00:30:28.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.299747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:30:28.329206 systemd-networkd[1042]: lo: Link UP Nov 1 00:30:28.329216 systemd-networkd[1042]: lo: Gained carrier Nov 1 00:30:28.329651 systemd-networkd[1042]: Enumeration completed Nov 1 00:30:28.329738 systemd[1]: Started systemd-networkd.service. Nov 1 00:30:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.330622 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:30:28.331873 systemd[1]: Finished systemd-udev-settle.service. 
Nov 1 00:30:28.332868 systemd-networkd[1042]: eth0: Link UP Nov 1 00:30:28.332881 systemd-networkd[1042]: eth0: Gained carrier Nov 1 00:30:28.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.333777 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:30:28.342277 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:30:28.353661 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:30:28.367282 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:30:28.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.368207 systemd[1]: Reached target cryptsetup.target. Nov 1 00:30:28.369992 systemd[1]: Starting lvm2-activation.service... Nov 1 00:30:28.373309 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:30:28.403247 systemd[1]: Finished lvm2-activation.service. Nov 1 00:30:28.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.404111 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:30:28.404889 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:30:28.404921 systemd[1]: Reached target local-fs.target. Nov 1 00:30:28.405576 systemd[1]: Reached target machines.target. Nov 1 00:30:28.407335 systemd[1]: Starting ldconfig.service... 
Nov 1 00:30:28.408427 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:30:28.408482 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:30:28.409477 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:30:28.411329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:30:28.413299 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:30:28.415152 systemd[1]: Starting systemd-sysext.service... Nov 1 00:30:28.416240 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) Nov 1 00:30:28.417236 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:30:28.424760 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:30:28.426146 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:30:28.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.428885 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:30:28.429052 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:30:28.441599 kernel: loop0: detected capacity change from 0 to 211168 Nov 1 00:30:28.490438 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:30:28.491075 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:30:28.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:28.500567 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:30:28.507166 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Nov 1 00:30:28.507166 systemd-fsck[1080]: /dev/vda1: 236 files, 117310/258078 clusters Nov 1 00:30:28.508617 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:30:28.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.511866 systemd[1]: Mounting boot.mount... Nov 1 00:30:28.519581 kernel: loop1: detected capacity change from 0 to 211168 Nov 1 00:30:28.520542 systemd[1]: Mounted boot.mount. Nov 1 00:30:28.526105 (sd-sysext)[1084]: Using extensions 'kubernetes'. Nov 1 00:30:28.526491 (sd-sysext)[1084]: Merged extensions into '/usr'. Nov 1 00:30:28.527687 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:30:28.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.546364 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:30:28.547667 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:30:28.549649 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:30:28.551591 systemd[1]: Starting modprobe@loop.service... Nov 1 00:30:28.552473 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:30:28.552632 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:30:28.553367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:30:28.553490 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:30:28.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.554963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:30:28.555069 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:30:28.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.556437 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:30:28.556583 systemd[1]: Finished modprobe@loop.service. Nov 1 00:30:28.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:28.557889 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:30:28.557994 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:30:28.589000 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:30:28.592222 systemd[1]: Finished ldconfig.service. Nov 1 00:30:28.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.835643 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:30:28.840512 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:30:28.842304 systemd[1]: Finished systemd-sysext.service. Nov 1 00:30:28.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:28.844276 systemd[1]: Starting ensure-sysext.service... Nov 1 00:30:28.845963 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:30:28.850172 systemd[1]: Reloading. Nov 1 00:30:28.856956 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:30:28.858998 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:30:28.861792 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Nov 1 00:30:28.888115 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-11-01T00:30:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:30:28.888147 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-11-01T00:30:28Z" level=info msg="torcx already run" Nov 1 00:30:28.946365 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:30:28.946388 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:30:28.962610 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:30:29.006000 audit: BPF prog-id=30 op=LOAD Nov 1 00:30:29.006000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:30:29.006000 audit: BPF prog-id=31 op=LOAD Nov 1 00:30:29.006000 audit: BPF prog-id=32 op=LOAD Nov 1 00:30:29.006000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:30:29.006000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:30:29.007000 audit: BPF prog-id=33 op=LOAD Nov 1 00:30:29.007000 audit: BPF prog-id=34 op=LOAD Nov 1 00:30:29.007000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:30:29.007000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:30:29.008000 audit: BPF prog-id=35 op=LOAD Nov 1 00:30:29.008000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:30:29.009000 audit: BPF prog-id=36 op=LOAD Nov 1 00:30:29.009000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:30:29.009000 audit: BPF prog-id=37 op=LOAD Nov 1 00:30:29.009000 audit: BPF prog-id=38 op=LOAD Nov 1 00:30:29.009000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:30:29.009000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:30:29.012626 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:30:29.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.016706 systemd[1]: Starting audit-rules.service... Nov 1 00:30:29.018369 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:30:29.020384 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:30:29.021000 audit: BPF prog-id=39 op=LOAD Nov 1 00:30:29.022742 systemd[1]: Starting systemd-resolved.service... Nov 1 00:30:29.023000 audit: BPF prog-id=40 op=LOAD Nov 1 00:30:29.024771 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:30:29.026680 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:30:29.027929 systemd[1]: Finished clean-ca-certificates.service. 
Nov 1 00:30:29.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.030000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.032209 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:30:29.035406 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:30:29.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.036620 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:30:29.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.039071 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.040178 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:30:29.041990 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:30:29.043696 systemd[1]: Starting modprobe@loop.service... Nov 1 00:30:29.044344 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.044467 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:30:29.045652 systemd[1]: Starting systemd-update-done.service... Nov 1 00:30:29.046367 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:30:29.047293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:30:29.047417 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:30:29.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.048599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:30:29.048706 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:30:29.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.049877 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:30:29.049989 systemd[1]: Finished modprobe@loop.service. Nov 1 00:30:29.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:30:29.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.051113 systemd[1]: Finished systemd-update-done.service. Nov 1 00:30:29.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.052259 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:30:29.052365 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.054843 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.055992 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:30:29.057659 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:30:29.059526 systemd[1]: Starting modprobe@loop.service... Nov 1 00:30:29.060275 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.060421 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:30:29.060532 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:30:29.061388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:30:29.061516 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:30:29.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.062714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:30:29.062821 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:30:29.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.064181 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:30:29.064289 systemd[1]: Finished modprobe@loop.service. Nov 1 00:30:29.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:30:29.065587 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 1 00:30:29.065682 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.067960 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.069094 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:30:29.071020 systemd[1]: Starting modprobe@drm.service... Nov 1 00:30:29.072917 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:30:29.074620 systemd[1]: Starting modprobe@loop.service... Nov 1 00:30:29.075348 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:30:29.075399 systemd-timesyncd[1160]: Initial clock synchronization to Sat 2025-11-01 00:30:29.217312 UTC. Nov 1 00:30:29.075519 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.075653 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:30:29.075000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:30:29.075000 audit[1178]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffed4e65b0 a2=420 a3=0 items=0 ppid=1150 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:30:29.075000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:30:29.076851 augenrules[1178]: No rules Nov 1 00:30:29.076850 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:30:29.077778 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 1 00:30:29.077865 systemd-resolved[1154]: Positive Trust Anchors: Nov 1 00:30:29.077875 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:30:29.078677 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:30:29.080105 systemd[1]: Finished audit-rules.service. Nov 1 00:30:29.081095 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:30:29.081128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:30:29.081236 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:30:29.082591 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:30:29.082702 systemd[1]: Finished modprobe@drm.service. Nov 1 00:30:29.083855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:30:29.083954 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:30:29.085284 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:30:29.085387 systemd[1]: Finished modprobe@loop.service. Nov 1 00:30:29.086929 systemd[1]: Reached target time-set.target. Nov 1 00:30:29.087766 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:30:29.087805 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.088075 systemd[1]: Finished ensure-sysext.service. Nov 1 00:30:29.089266 systemd-resolved[1154]: Defaulting to hostname 'linux'. Nov 1 00:30:29.090612 systemd[1]: Started systemd-resolved.service. 
Nov 1 00:30:29.091328 systemd[1]: Reached target network.target. Nov 1 00:30:29.092149 systemd[1]: Reached target nss-lookup.target. Nov 1 00:30:29.092854 systemd[1]: Reached target sysinit.target. Nov 1 00:30:29.093585 systemd[1]: Started motdgen.path. Nov 1 00:30:29.094183 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:30:29.095286 systemd[1]: Started logrotate.timer. Nov 1 00:30:29.096094 systemd[1]: Started mdadm.timer. Nov 1 00:30:29.096698 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:30:29.097381 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:30:29.097410 systemd[1]: Reached target paths.target. Nov 1 00:30:29.098085 systemd[1]: Reached target timers.target. Nov 1 00:30:29.099023 systemd[1]: Listening on dbus.socket. Nov 1 00:30:29.100620 systemd[1]: Starting docker.socket... Nov 1 00:30:29.103439 systemd[1]: Listening on sshd.socket. Nov 1 00:30:29.104283 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:30:29.104695 systemd[1]: Listening on docker.socket. Nov 1 00:30:29.105414 systemd[1]: Reached target sockets.target. Nov 1 00:30:29.106146 systemd[1]: Reached target basic.target. Nov 1 00:30:29.106838 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.106867 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:30:29.107762 systemd[1]: Starting containerd.service... Nov 1 00:30:29.109354 systemd[1]: Starting dbus.service... Nov 1 00:30:29.110940 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:30:29.112796 systemd[1]: Starting extend-filesystems.service... 
Nov 1 00:30:29.113815 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:30:29.115040 jq[1193]: false Nov 1 00:30:29.114867 systemd[1]: Starting motdgen.service... Nov 1 00:30:29.116489 systemd[1]: Starting prepare-helm.service... Nov 1 00:30:29.118176 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:30:29.120757 systemd[1]: Starting sshd-keygen.service... Nov 1 00:30:29.123530 systemd[1]: Starting systemd-logind.service... Nov 1 00:30:29.124453 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:30:29.124528 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:30:29.124914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:30:29.125574 systemd[1]: Starting update-engine.service... Nov 1 00:30:29.126896 dbus-daemon[1192]: [system] SELinux support is enabled Nov 1 00:30:29.127333 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:30:29.129474 systemd[1]: Started dbus.service. Nov 1 00:30:29.131211 jq[1207]: true Nov 1 00:30:29.134290 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:30:29.134455 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:30:29.141273 tar[1214]: linux-arm64/LICENSE Nov 1 00:30:29.141273 tar[1214]: linux-arm64/helm Nov 1 00:30:29.135506 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:30:29.135680 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Nov 1 00:30:29.138457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:30:29.138487 systemd[1]: Reached target system-config.target. Nov 1 00:30:29.139867 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:30:29.139888 systemd[1]: Reached target user-config.target. Nov 1 00:30:29.142418 extend-filesystems[1194]: Found loop1 Nov 1 00:30:29.142418 extend-filesystems[1194]: Found vda Nov 1 00:30:29.142418 extend-filesystems[1194]: Found vda1 Nov 1 00:30:29.142418 extend-filesystems[1194]: Found vda2 Nov 1 00:30:29.142418 extend-filesystems[1194]: Found vda3 Nov 1 00:30:29.142418 extend-filesystems[1194]: Found usr Nov 1 00:30:29.156129 extend-filesystems[1194]: Found vda4 Nov 1 00:30:29.156129 extend-filesystems[1194]: Found vda6 Nov 1 00:30:29.156129 extend-filesystems[1194]: Found vda7 Nov 1 00:30:29.156129 extend-filesystems[1194]: Found vda9 Nov 1 00:30:29.156129 extend-filesystems[1194]: Checking size of /dev/vda9 Nov 1 00:30:29.156129 extend-filesystems[1194]: Resized partition /dev/vda9 Nov 1 00:30:29.169039 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:30:29.144915 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:30:29.171436 jq[1215]: true Nov 1 00:30:29.171578 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:30:29.145066 systemd[1]: Finished motdgen.service. Nov 1 00:30:29.192675 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) Nov 1 00:30:29.193159 systemd-logind[1203]: New seat seat0. Nov 1 00:30:29.195548 systemd[1]: Started systemd-logind.service. 
Nov 1 00:30:29.196687 update_engine[1204]: I1101 00:30:29.195770 1204 main.cc:92] Flatcar Update Engine starting Nov 1 00:30:29.197695 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:30:29.199710 systemd[1]: Started update-engine.service. Nov 1 00:30:29.211838 extend-filesystems[1222]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:30:29.211838 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:30:29.211838 extend-filesystems[1222]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:30:29.216925 update_engine[1204]: I1101 00:30:29.199769 1204 update_check_scheduler.cc:74] Next update check in 4m44s Nov 1 00:30:29.202283 systemd[1]: Started locksmithd.service. Nov 1 00:30:29.217023 extend-filesystems[1194]: Resized filesystem in /dev/vda9 Nov 1 00:30:29.217846 bash[1242]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:30:29.212588 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:30:29.212756 systemd[1]: Finished extend-filesystems.service. Nov 1 00:30:29.217157 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:30:29.222566 env[1216]: time="2025-11-01T00:30:29.222299040Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:30:29.239572 env[1216]: time="2025-11-01T00:30:29.239521880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:30:29.239779 env[1216]: time="2025-11-01T00:30:29.239759640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241653000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241682320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241854320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241871000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241882480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241891480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.241958640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.242208200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.242314520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:30:29.242520 env[1216]: time="2025-11-01T00:30:29.242328800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:30:29.242825 env[1216]: time="2025-11-01T00:30:29.242379360Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:30:29.242825 env[1216]: time="2025-11-01T00:30:29.242391560Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245768280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245806400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245820240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245848800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245863920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245876520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.245888680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246198760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246215680Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246227480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246239400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246251120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246362720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:30:29.246598 env[1216]: time="2025-11-01T00:30:29.246431400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:30:29.247075 env[1216]: time="2025-11-01T00:30:29.247036640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:30:29.247113 env[1216]: time="2025-11-01T00:30:29.247087080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247113 env[1216]: time="2025-11-01T00:30:29.247101960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:30:29.247219 env[1216]: time="2025-11-01T00:30:29.247203800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247218960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247230560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247241840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247253640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247265040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247275440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247252 env[1216]: time="2025-11-01T00:30:29.247285880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247447 env[1216]: time="2025-11-01T00:30:29.247298600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:30:29.247447 env[1216]: time="2025-11-01T00:30:29.247414920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247447 env[1216]: time="2025-11-01T00:30:29.247430280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247447 env[1216]: time="2025-11-01T00:30:29.247441160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 1 00:30:29.247533 env[1216]: time="2025-11-01T00:30:29.247452080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:30:29.247533 env[1216]: time="2025-11-01T00:30:29.247465560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:30:29.247533 env[1216]: time="2025-11-01T00:30:29.247476920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:30:29.247533 env[1216]: time="2025-11-01T00:30:29.247503840Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:30:29.247630 env[1216]: time="2025-11-01T00:30:29.247537280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:30:29.247792 env[1216]: time="2025-11-01T00:30:29.247728720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:30:29.247792 env[1216]: time="2025-11-01T00:30:29.247786120Z" level=info msg="Connect containerd service" Nov 1 00:30:29.248403 env[1216]: time="2025-11-01T00:30:29.247818000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:30:29.248403 env[1216]: time="2025-11-01T00:30:29.248363600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:30:29.248795 env[1216]: time="2025-11-01T00:30:29.248726440Z" level=info msg="Start subscribing containerd event" Nov 1 00:30:29.248795 env[1216]: time="2025-11-01T00:30:29.248773880Z" level=info msg="Start recovering state" Nov 1 00:30:29.248864 env[1216]: 
time="2025-11-01T00:30:29.248823600Z" level=info msg="Start event monitor" Nov 1 00:30:29.248864 env[1216]: time="2025-11-01T00:30:29.248840040Z" level=info msg="Start snapshots syncer" Nov 1 00:30:29.248864 env[1216]: time="2025-11-01T00:30:29.248848120Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:30:29.248864 env[1216]: time="2025-11-01T00:30:29.248855920Z" level=info msg="Start streaming server" Nov 1 00:30:29.250120 env[1216]: time="2025-11-01T00:30:29.248999360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:30:29.250120 env[1216]: time="2025-11-01T00:30:29.249041760Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:30:29.250120 env[1216]: time="2025-11-01T00:30:29.249103280Z" level=info msg="containerd successfully booted in 0.027721s" Nov 1 00:30:29.249184 systemd[1]: Started containerd.service. Nov 1 00:30:29.259936 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:30:29.534384 tar[1214]: linux-arm64/README.md Nov 1 00:30:29.538675 systemd[1]: Finished prepare-helm.service. Nov 1 00:30:30.068304 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:30:30.085635 systemd[1]: Finished sshd-keygen.service. Nov 1 00:30:30.087804 systemd[1]: Starting issuegen.service... Nov 1 00:30:30.092382 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:30:30.092526 systemd[1]: Finished issuegen.service. Nov 1 00:30:30.094637 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:30:30.100214 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:30:30.102369 systemd[1]: Started getty@tty1.service. Nov 1 00:30:30.104319 systemd[1]: Started serial-getty@ttyAMA0.service. Nov 1 00:30:30.105367 systemd[1]: Reached target getty.target. Nov 1 00:30:30.230167 systemd-networkd[1042]: eth0: Gained IPv6LL Nov 1 00:30:30.231810 systemd[1]: Finished systemd-networkd-wait-online.service. 
Nov 1 00:30:30.233113 systemd[1]: Reached target network-online.target. Nov 1 00:30:30.235354 systemd[1]: Starting kubelet.service... Nov 1 00:30:30.828614 systemd[1]: Started kubelet.service. Nov 1 00:30:30.830379 systemd[1]: Reached target multi-user.target. Nov 1 00:30:30.833093 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:30:30.839353 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:30:30.839639 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:30:30.840950 systemd[1]: Startup finished in 577ms (kernel) + 4.325s (initrd) + 4.956s (userspace) = 9.859s. Nov 1 00:30:31.195058 kubelet[1274]: E1101 00:30:31.194927 1274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:30:31.196898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:30:31.197036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:30:33.907974 systemd[1]: Created slice system-sshd.slice. Nov 1 00:30:33.909081 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:60466.service. Nov 1 00:30:33.949024 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 60466 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:30:33.951303 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:30:33.960173 systemd-logind[1203]: New session 1 of user core. Nov 1 00:30:33.961068 systemd[1]: Created slice user-500.slice. Nov 1 00:30:33.962093 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:30:33.970098 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:30:33.971346 systemd[1]: Starting user@500.service... 
Nov 1 00:30:33.973912 (systemd)[1286]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:30:34.031220 systemd[1286]: Queued start job for default target default.target. Nov 1 00:30:34.031667 systemd[1286]: Reached target paths.target. Nov 1 00:30:34.031698 systemd[1286]: Reached target sockets.target. Nov 1 00:30:34.031719 systemd[1286]: Reached target timers.target. Nov 1 00:30:34.031728 systemd[1286]: Reached target basic.target. Nov 1 00:30:34.031768 systemd[1286]: Reached target default.target. Nov 1 00:30:34.031792 systemd[1286]: Startup finished in 52ms. Nov 1 00:30:34.031840 systemd[1]: Started user@500.service. Nov 1 00:30:34.032957 systemd[1]: Started session-1.scope. Nov 1 00:30:34.083946 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:60472.service. Nov 1 00:30:34.127969 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 60472 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:30:34.129387 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:30:34.133626 systemd-logind[1203]: New session 2 of user core. Nov 1 00:30:34.133901 systemd[1]: Started session-2.scope. Nov 1 00:30:34.187357 sshd[1295]: pam_unix(sshd:session): session closed for user core Nov 1 00:30:34.190263 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:60484.service. Nov 1 00:30:34.190747 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:60472.service: Deactivated successfully. Nov 1 00:30:34.191529 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:30:34.192076 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:30:34.192790 systemd-logind[1203]: Removed session 2. 
Nov 1 00:30:34.225214 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 60484 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:30:34.226318 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:30:34.229180 systemd-logind[1203]: New session 3 of user core. Nov 1 00:30:34.229937 systemd[1]: Started session-3.scope. Nov 1 00:30:34.278984 sshd[1300]: pam_unix(sshd:session): session closed for user core Nov 1 00:30:34.281618 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:60484.service: Deactivated successfully. Nov 1 00:30:34.282159 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:30:34.285053 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:30:34.286100 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:60500.service. Nov 1 00:30:34.286760 systemd-logind[1203]: Removed session 3. Nov 1 00:30:34.320281 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 60500 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:30:34.321480 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:30:34.324659 systemd-logind[1203]: New session 4 of user core. Nov 1 00:30:34.325479 systemd[1]: Started session-4.scope. Nov 1 00:30:34.378487 sshd[1307]: pam_unix(sshd:session): session closed for user core Nov 1 00:30:34.382303 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:60500.service: Deactivated successfully. Nov 1 00:30:34.382867 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:30:34.383364 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:30:34.384395 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:60516.service. Nov 1 00:30:34.385042 systemd-logind[1203]: Removed session 4. 
Nov 1 00:30:34.417504 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 60516 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:30:34.418532 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:30:34.421377 systemd-logind[1203]: New session 5 of user core. Nov 1 00:30:34.422133 systemd[1]: Started session-5.scope. Nov 1 00:30:34.478266 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:30:34.478508 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:30:34.516157 systemd[1]: Starting docker.service... Nov 1 00:30:34.571311 env[1327]: time="2025-11-01T00:30:34.571252401Z" level=info msg="Starting up" Nov 1 00:30:34.572603 env[1327]: time="2025-11-01T00:30:34.572580035Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:30:34.572702 env[1327]: time="2025-11-01T00:30:34.572688674Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:30:34.572765 env[1327]: time="2025-11-01T00:30:34.572749824Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:30:34.572815 env[1327]: time="2025-11-01T00:30:34.572803214Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:30:34.575017 env[1327]: time="2025-11-01T00:30:34.574997172Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:30:34.575099 env[1327]: time="2025-11-01T00:30:34.575085522Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:30:34.575167 env[1327]: time="2025-11-01T00:30:34.575153300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:30:34.575219 env[1327]: time="2025-11-01T00:30:34.575207053Z" level=info msg="ClientConn switching 
balancer to \"pick_first\"" module=grpc Nov 1 00:30:34.713365 env[1327]: time="2025-11-01T00:30:34.713319801Z" level=info msg="Loading containers: start." Nov 1 00:30:34.820595 kernel: Initializing XFRM netlink socket Nov 1 00:30:34.843031 env[1327]: time="2025-11-01T00:30:34.842998035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:30:34.895623 systemd-networkd[1042]: docker0: Link UP Nov 1 00:30:34.915809 env[1327]: time="2025-11-01T00:30:34.915756399Z" level=info msg="Loading containers: done." Nov 1 00:30:34.932033 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck496169691-merged.mount: Deactivated successfully. Nov 1 00:30:34.934196 env[1327]: time="2025-11-01T00:30:34.934143868Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:30:34.934339 env[1327]: time="2025-11-01T00:30:34.934310747Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:30:34.934418 env[1327]: time="2025-11-01T00:30:34.934404473Z" level=info msg="Daemon has completed initialization" Nov 1 00:30:34.947588 systemd[1]: Started docker.service. Nov 1 00:30:34.955070 env[1327]: time="2025-11-01T00:30:34.954959790Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:30:35.719610 env[1216]: time="2025-11-01T00:30:35.719341408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 1 00:30:36.411064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794381698.mount: Deactivated successfully. 
Nov 1 00:30:37.845315 env[1216]: time="2025-11-01T00:30:37.845268308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:37.846998 env[1216]: time="2025-11-01T00:30:37.846959981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:37.848788 env[1216]: time="2025-11-01T00:30:37.848763467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:37.850531 env[1216]: time="2025-11-01T00:30:37.850501179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:37.851413 env[1216]: time="2025-11-01T00:30:37.851366087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 1 00:30:37.852611 env[1216]: time="2025-11-01T00:30:37.852579730Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 1 00:30:39.570327 env[1216]: time="2025-11-01T00:30:39.570263235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:39.572365 env[1216]: time="2025-11-01T00:30:39.572333588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:30:39.574799 env[1216]: time="2025-11-01T00:30:39.574772699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:39.576995 env[1216]: time="2025-11-01T00:30:39.576962445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:39.577899 env[1216]: time="2025-11-01T00:30:39.577873764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 1 00:30:39.578392 env[1216]: time="2025-11-01T00:30:39.578366701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 00:30:41.066624 env[1216]: time="2025-11-01T00:30:41.066567630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:41.067997 env[1216]: time="2025-11-01T00:30:41.067947848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:41.069662 env[1216]: time="2025-11-01T00:30:41.069630336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:41.072330 env[1216]: time="2025-11-01T00:30:41.072299841Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:41.073008 env[1216]: time="2025-11-01T00:30:41.072976636Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 1 00:30:41.073600 env[1216]: time="2025-11-01T00:30:41.073575835Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 00:30:41.447887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:30:41.448079 systemd[1]: Stopped kubelet.service. Nov 1 00:30:41.449496 systemd[1]: Starting kubelet.service... Nov 1 00:30:41.546470 systemd[1]: Started kubelet.service. Nov 1 00:30:41.580951 kubelet[1462]: E1101 00:30:41.580897 1462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:30:41.583760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:30:41.583888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:30:42.193232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004011992.mount: Deactivated successfully. 
Nov 1 00:30:42.789419 env[1216]: time="2025-11-01T00:30:42.788772376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:42.790376 env[1216]: time="2025-11-01T00:30:42.790110263Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:42.791972 env[1216]: time="2025-11-01T00:30:42.791889121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:42.793139 env[1216]: time="2025-11-01T00:30:42.792871341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:42.793294 env[1216]: time="2025-11-01T00:30:42.793253744Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 1 00:30:42.794040 env[1216]: time="2025-11-01T00:30:42.793764443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 00:30:43.276476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690418113.mount: Deactivated successfully. 
Nov 1 00:30:44.703680 env[1216]: time="2025-11-01T00:30:44.703632440Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:44.705392 env[1216]: time="2025-11-01T00:30:44.705361481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:44.706974 env[1216]: time="2025-11-01T00:30:44.706947251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:44.708888 env[1216]: time="2025-11-01T00:30:44.708861477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:44.709819 env[1216]: time="2025-11-01T00:30:44.709793221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 1 00:30:44.710223 env[1216]: time="2025-11-01T00:30:44.710199531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:30:45.134187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752062982.mount: Deactivated successfully. 
Nov 1 00:30:45.138691 env[1216]: time="2025-11-01T00:30:45.138655099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:45.140081 env[1216]: time="2025-11-01T00:30:45.140054202Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:45.141401 env[1216]: time="2025-11-01T00:30:45.141363892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:45.143391 env[1216]: time="2025-11-01T00:30:45.143357016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:45.143954 env[1216]: time="2025-11-01T00:30:45.143925895Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 1 00:30:45.144470 env[1216]: time="2025-11-01T00:30:45.144439242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 00:30:45.580714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271207809.mount: Deactivated successfully. 
Nov 1 00:30:48.397662 env[1216]: time="2025-11-01T00:30:48.397610518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:48.399560 env[1216]: time="2025-11-01T00:30:48.399522742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:48.401894 env[1216]: time="2025-11-01T00:30:48.401862331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:48.403767 env[1216]: time="2025-11-01T00:30:48.403741742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:48.405578 env[1216]: time="2025-11-01T00:30:48.405528605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 1 00:30:51.601772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:30:51.601953 systemd[1]: Stopped kubelet.service. Nov 1 00:30:51.603334 systemd[1]: Starting kubelet.service... Nov 1 00:30:51.697816 systemd[1]: Started kubelet.service. 
Nov 1 00:30:51.728726 kubelet[1495]: E1101 00:30:51.728686 1495 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:30:51.731015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:30:51.731137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:30:52.733767 systemd[1]: Stopped kubelet.service. Nov 1 00:30:52.735973 systemd[1]: Starting kubelet.service... Nov 1 00:30:52.755731 systemd[1]: Reloading. Nov 1 00:30:52.805801 /usr/lib/systemd/system-generators/torcx-generator[1531]: time="2025-11-01T00:30:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:30:52.806137 /usr/lib/systemd/system-generators/torcx-generator[1531]: time="2025-11-01T00:30:52Z" level=info msg="torcx already run" Nov 1 00:30:52.972939 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:30:52.972960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:30:52.989108 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:30:53.056583 systemd[1]: Started kubelet.service. Nov 1 00:30:53.057793 systemd[1]: Stopping kubelet.service... 
Nov 1 00:30:53.058033 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:30:53.058192 systemd[1]: Stopped kubelet.service. Nov 1 00:30:53.059674 systemd[1]: Starting kubelet.service... Nov 1 00:30:53.147260 systemd[1]: Started kubelet.service. Nov 1 00:30:53.185811 kubelet[1576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:30:53.185811 kubelet[1576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:30:53.185811 kubelet[1576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:30:53.186105 kubelet[1576]: I1101 00:30:53.185861 1576 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:30:53.794222 kubelet[1576]: I1101 00:30:53.794170 1576 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:30:53.794222 kubelet[1576]: I1101 00:30:53.794210 1576 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:30:53.794648 kubelet[1576]: I1101 00:30:53.794631 1576 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:30:53.821633 kubelet[1576]: E1101 00:30:53.821596 1576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:30:53.822596 kubelet[1576]: I1101 00:30:53.822579 1576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:30:53.828798 kubelet[1576]: E1101 00:30:53.828754 1576 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:30:53.828915 kubelet[1576]: I1101 00:30:53.828900 1576 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:30:53.831605 kubelet[1576]: I1101 00:30:53.831585 1576 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:30:53.832782 kubelet[1576]: I1101 00:30:53.832747 1576 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:30:53.833023 kubelet[1576]: I1101 00:30:53.832869 1576 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:30:53.833212 kubelet[1576]: I1101 00:30:53.833200 1576 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:30:53.833293 kubelet[1576]: I1101 00:30:53.833283 1576 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:30:53.833529 kubelet[1576]: I1101 00:30:53.833515 1576 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:30:53.836349 kubelet[1576]: I1101 00:30:53.836325 1576 kubelet.go:480] "Attempting to sync node with API 
server" Nov 1 00:30:53.836452 kubelet[1576]: I1101 00:30:53.836441 1576 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:30:53.836608 kubelet[1576]: I1101 00:30:53.836595 1576 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:30:53.846121 kubelet[1576]: I1101 00:30:53.846102 1576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:30:53.847344 kubelet[1576]: I1101 00:30:53.847325 1576 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:30:53.849879 kubelet[1576]: E1101 00:30:53.849837 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:30:53.849961 kubelet[1576]: I1101 00:30:53.849938 1576 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:30:53.850084 kubelet[1576]: W1101 00:30:53.850068 1576 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:30:53.850268 kubelet[1576]: E1101 00:30:53.850230 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:30:53.852835 kubelet[1576]: I1101 00:30:53.852811 1576 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:30:53.852915 kubelet[1576]: I1101 00:30:53.852864 1576 server.go:1289] "Started kubelet" Nov 1 00:30:53.855895 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:30:53.855978 kubelet[1576]: I1101 00:30:53.855074 1576 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:30:53.855978 kubelet[1576]: I1101 00:30:53.855155 1576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:30:53.855978 kubelet[1576]: I1101 00:30:53.855377 1576 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:30:53.856304 kubelet[1576]: I1101 00:30:53.856071 1576 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:30:53.856741 kubelet[1576]: I1101 00:30:53.856698 1576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:30:53.857208 kubelet[1576]: I1101 00:30:53.857130 1576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:30:53.857660 kubelet[1576]: E1101 00:30:53.856650 1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba9af0644e52 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:30:53.852831314 +0000 UTC m=+0.702063234,LastTimestamp:2025-11-01 00:30:53.852831314 +0000 UTC m=+0.702063234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:30:53.858684 kubelet[1576]: I1101 00:30:53.858650 1576 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:30:53.858752 kubelet[1576]: I1101 00:30:53.858745 1576 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:30:53.858791 kubelet[1576]: I1101 00:30:53.858786 1576 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:30:53.858852 kubelet[1576]: E1101 00:30:53.858811 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:30:53.859097 kubelet[1576]: E1101 00:30:53.858980 1576 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:30:53.859097 kubelet[1576]: E1101 00:30:53.859090 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:30:53.859581 kubelet[1576]: I1101 00:30:53.859532 1576 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:30:53.859839 kubelet[1576]: E1101 00:30:53.859795 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Nov 1 00:30:53.860313 kubelet[1576]: I1101 00:30:53.860290 1576 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:30:53.860313 kubelet[1576]: I1101 00:30:53.860307 1576 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:30:53.870293 kubelet[1576]: I1101 00:30:53.870274 1576 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:30:53.870293 kubelet[1576]: I1101 00:30:53.870289 1576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:30:53.870391 kubelet[1576]: I1101 00:30:53.870305 1576 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:30:53.875161 kubelet[1576]: I1101 00:30:53.875124 1576 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:30:53.876130 kubelet[1576]: I1101 00:30:53.876113 1576 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:30:53.876232 kubelet[1576]: I1101 00:30:53.876220 1576 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:30:53.876309 kubelet[1576]: I1101 00:30:53.876295 1576 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:30:53.876369 kubelet[1576]: I1101 00:30:53.876360 1576 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:30:53.876453 kubelet[1576]: E1101 00:30:53.876437 1576 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:30:53.880009 kubelet[1576]: I1101 00:30:53.879989 1576 policy_none.go:49] "None policy: Start" Nov 1 00:30:53.880009 kubelet[1576]: I1101 00:30:53.880012 1576 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:30:53.880100 kubelet[1576]: I1101 00:30:53.880024 1576 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:30:53.880616 kubelet[1576]: E1101 00:30:53.880594 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:30:53.884382 systemd[1]: Created slice kubepods.slice. Nov 1 00:30:53.888136 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:30:53.890449 systemd[1]: Created slice kubepods-besteffort.slice. 
Nov 1 00:30:53.905192 kubelet[1576]: E1101 00:30:53.905163 1576 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:30:53.905332 kubelet[1576]: I1101 00:30:53.905311 1576 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:30:53.905374 kubelet[1576]: I1101 00:30:53.905328 1576 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:30:53.905737 kubelet[1576]: I1101 00:30:53.905593 1576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:30:53.906388 kubelet[1576]: E1101 00:30:53.906343 1576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:30:53.906388 kubelet[1576]: E1101 00:30:53.906379 1576 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:30:53.984725 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 1 00:30:53.996629 kubelet[1576]: E1101 00:30:53.996597 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:30:53.998953 systemd[1]: Created slice kubepods-burstable-pod7135a393b0007c3f44cd3f79fa652fa7.slice. Nov 1 00:30:54.000198 kubelet[1576]: E1101 00:30:54.000181 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:30:54.001312 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Nov 1 00:30:54.002499 kubelet[1576]: E1101 00:30:54.002481 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:30:54.006466 kubelet[1576]: I1101 00:30:54.006447 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:30:54.006885 kubelet[1576]: E1101 00:30:54.006864 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Nov 1 00:30:54.059944 kubelet[1576]: I1101 00:30:54.059305 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:54.059944 kubelet[1576]: I1101 00:30:54.059340 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7135a393b0007c3f44cd3f79fa652fa7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7135a393b0007c3f44cd3f79fa652fa7\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:54.059944 kubelet[1576]: I1101 00:30:54.059355 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7135a393b0007c3f44cd3f79fa652fa7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7135a393b0007c3f44cd3f79fa652fa7\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:54.059944 kubelet[1576]: I1101 00:30:54.059370 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7135a393b0007c3f44cd3f79fa652fa7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7135a393b0007c3f44cd3f79fa652fa7\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:54.059944 kubelet[1576]: I1101 00:30:54.059385 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:54.060497 kubelet[1576]: I1101 00:30:54.059398 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:54.060497 kubelet[1576]: I1101 00:30:54.059439 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:54.060497 kubelet[1576]: I1101 00:30:54.059457 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:54.060497 kubelet[1576]: I1101 00:30:54.059476 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:54.060948 kubelet[1576]: E1101 00:30:54.060913 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Nov 1 00:30:54.208897 kubelet[1576]: I1101 00:30:54.208873 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:30:54.209586 kubelet[1576]: E1101 00:30:54.209557 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Nov 1 00:30:54.297102 kubelet[1576]: E1101 00:30:54.297079 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:54.297760 env[1216]: time="2025-11-01T00:30:54.297719696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 1 00:30:54.300928 kubelet[1576]: E1101 00:30:54.300906 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:54.301564 env[1216]: time="2025-11-01T00:30:54.301338580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7135a393b0007c3f44cd3f79fa652fa7,Namespace:kube-system,Attempt:0,}" Nov 1 00:30:54.302869 kubelet[1576]: E1101 00:30:54.302851 1576 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:54.303389 env[1216]: time="2025-11-01T00:30:54.303168297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 1 00:30:54.462020 kubelet[1576]: E1101 00:30:54.461981 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Nov 1 00:30:54.611580 kubelet[1576]: I1101 00:30:54.611397 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:30:54.611828 kubelet[1576]: E1101 00:30:54.611788 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Nov 1 00:30:54.757638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732992849.mount: Deactivated successfully. 
Nov 1 00:30:54.762008 env[1216]: time="2025-11-01T00:30:54.761970477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.764393 env[1216]: time="2025-11-01T00:30:54.764358836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.765457 env[1216]: time="2025-11-01T00:30:54.765424003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.767118 env[1216]: time="2025-11-01T00:30:54.767093364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.767879 env[1216]: time="2025-11-01T00:30:54.767853752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.769374 env[1216]: time="2025-11-01T00:30:54.769347147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.772716 env[1216]: time="2025-11-01T00:30:54.772681667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.774109 env[1216]: time="2025-11-01T00:30:54.774086197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
00:30:54.776473 env[1216]: time="2025-11-01T00:30:54.776444935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.779417 env[1216]: time="2025-11-01T00:30:54.779391736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.780114 env[1216]: time="2025-11-01T00:30:54.780090879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.781035 env[1216]: time="2025-11-01T00:30:54.781011142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804367432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804397534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804407381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804531390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e2ba6ae53500c6a19e9e11972b258d6f8062289ce72a5ab08e038d2c2e1aada pid=1644 runtime=io.containerd.runc.v2 Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804311632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804351140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:54.804588 env[1216]: time="2025-11-01T00:30:54.804361668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:54.806135 env[1216]: time="2025-11-01T00:30:54.804979152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:54.806135 env[1216]: time="2025-11-01T00:30:54.805010615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:54.806135 env[1216]: time="2025-11-01T00:30:54.805061772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:54.806135 env[1216]: time="2025-11-01T00:30:54.805499127Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58e3bca49a97c1113ac68a8987a306837c0d219c81ee44b63cb5ccc31666a9c5 pid=1630 runtime=io.containerd.runc.v2 Nov 1 00:30:54.806263 env[1216]: time="2025-11-01T00:30:54.806181578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e6ac5b9a3c651a14711cb9e38f605684a76721ac1b8fb327f018bc3926e78f3 pid=1637 runtime=io.containerd.runc.v2 Nov 1 00:30:54.820676 systemd[1]: Started cri-containerd-0e6ac5b9a3c651a14711cb9e38f605684a76721ac1b8fb327f018bc3926e78f3.scope. Nov 1 00:30:54.821580 systemd[1]: Started cri-containerd-58e3bca49a97c1113ac68a8987a306837c0d219c81ee44b63cb5ccc31666a9c5.scope. Nov 1 00:30:54.822599 systemd[1]: Started cri-containerd-7e2ba6ae53500c6a19e9e11972b258d6f8062289ce72a5ab08e038d2c2e1aada.scope. 
Nov 1 00:30:54.857678 env[1216]: time="2025-11-01T00:30:54.857638053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"58e3bca49a97c1113ac68a8987a306837c0d219c81ee44b63cb5ccc31666a9c5\"" Nov 1 00:30:54.858711 kubelet[1576]: E1101 00:30:54.858686 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:54.863958 env[1216]: time="2025-11-01T00:30:54.863885430Z" level=info msg="CreateContainer within sandbox \"58e3bca49a97c1113ac68a8987a306837c0d219c81ee44b63cb5ccc31666a9c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:30:54.866996 env[1216]: time="2025-11-01T00:30:54.866954719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7135a393b0007c3f44cd3f79fa652fa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e6ac5b9a3c651a14711cb9e38f605684a76721ac1b8fb327f018bc3926e78f3\"" Nov 1 00:30:54.867481 kubelet[1576]: E1101 00:30:54.867459 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:54.871281 env[1216]: time="2025-11-01T00:30:54.871249770Z" level=info msg="CreateContainer within sandbox \"0e6ac5b9a3c651a14711cb9e38f605684a76721ac1b8fb327f018bc3926e78f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:30:54.875416 env[1216]: time="2025-11-01T00:30:54.875379583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e2ba6ae53500c6a19e9e11972b258d6f8062289ce72a5ab08e038d2c2e1aada\"" Nov 1 00:30:54.876498 kubelet[1576]: E1101 00:30:54.876475 1576 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:54.879655 env[1216]: time="2025-11-01T00:30:54.879613990Z" level=info msg="CreateContainer within sandbox \"7e2ba6ae53500c6a19e9e11972b258d6f8062289ce72a5ab08e038d2c2e1aada\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:30:54.880718 env[1216]: time="2025-11-01T00:30:54.880665427Z" level=info msg="CreateContainer within sandbox \"58e3bca49a97c1113ac68a8987a306837c0d219c81ee44b63cb5ccc31666a9c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b19812fe5accea9b8163a0fc42cfbd02effbabef54d7d750996b83bf0ea45028\"" Nov 1 00:30:54.881374 env[1216]: time="2025-11-01T00:30:54.881350560Z" level=info msg="StartContainer for \"b19812fe5accea9b8163a0fc42cfbd02effbabef54d7d750996b83bf0ea45028\"" Nov 1 00:30:54.890962 env[1216]: time="2025-11-01T00:30:54.890909360Z" level=info msg="CreateContainer within sandbox \"0e6ac5b9a3c651a14711cb9e38f605684a76721ac1b8fb327f018bc3926e78f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4400026ced33e673d515aa232f537b1355d3eb159b114c778b38f49e576c792c\"" Nov 1 00:30:54.891475 env[1216]: time="2025-11-01T00:30:54.891445506Z" level=info msg="StartContainer for \"4400026ced33e673d515aa232f537b1355d3eb159b114c778b38f49e576c792c\"" Nov 1 00:30:54.892924 env[1216]: time="2025-11-01T00:30:54.892893348Z" level=info msg="CreateContainer within sandbox \"7e2ba6ae53500c6a19e9e11972b258d6f8062289ce72a5ab08e038d2c2e1aada\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f5ec85affa85f7e05ac06642e596b7af984f471b7fba4113dea682867dc3884\"" Nov 1 00:30:54.893373 env[1216]: time="2025-11-01T00:30:54.893342311Z" level=info msg="StartContainer for \"5f5ec85affa85f7e05ac06642e596b7af984f471b7fba4113dea682867dc3884\"" Nov 1 00:30:54.897410 
systemd[1]: Started cri-containerd-b19812fe5accea9b8163a0fc42cfbd02effbabef54d7d750996b83bf0ea45028.scope. Nov 1 00:30:54.912604 systemd[1]: Started cri-containerd-4400026ced33e673d515aa232f537b1355d3eb159b114c778b38f49e576c792c.scope. Nov 1 00:30:54.916995 systemd[1]: Started cri-containerd-5f5ec85affa85f7e05ac06642e596b7af984f471b7fba4113dea682867dc3884.scope. Nov 1 00:30:54.941358 kubelet[1576]: E1101 00:30:54.941303 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:30:54.950765 env[1216]: time="2025-11-01T00:30:54.950726893Z" level=info msg="StartContainer for \"b19812fe5accea9b8163a0fc42cfbd02effbabef54d7d750996b83bf0ea45028\" returns successfully" Nov 1 00:30:54.969890 env[1216]: time="2025-11-01T00:30:54.969850137Z" level=info msg="StartContainer for \"5f5ec85affa85f7e05ac06642e596b7af984f471b7fba4113dea682867dc3884\" returns successfully" Nov 1 00:30:54.975307 env[1216]: time="2025-11-01T00:30:54.975266396Z" level=info msg="StartContainer for \"4400026ced33e673d515aa232f537b1355d3eb159b114c778b38f49e576c792c\" returns successfully" Nov 1 00:30:55.413151 kubelet[1576]: I1101 00:30:55.413119 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:30:55.887839 kubelet[1576]: E1101 00:30:55.887809 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:30:55.887979 kubelet[1576]: E1101 00:30:55.887937 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:55.889665 kubelet[1576]: E1101 00:30:55.889630 
1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:30:55.889765 kubelet[1576]: E1101 00:30:55.889747 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:55.891312 kubelet[1576]: E1101 00:30:55.891292 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:30:55.891409 kubelet[1576]: E1101 00:30:55.891392 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:56.138927 kubelet[1576]: E1101 00:30:56.138830 1576 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:30:56.274820 kubelet[1576]: I1101 00:30:56.274780 1576 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:30:56.274986 kubelet[1576]: E1101 00:30:56.274972 1576 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:30:56.308778 kubelet[1576]: E1101 00:30:56.308738 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:30:56.409898 kubelet[1576]: E1101 00:30:56.409811 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:30:56.510360 kubelet[1576]: E1101 00:30:56.510330 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:30:56.659335 kubelet[1576]: I1101 00:30:56.659288 1576 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:56.664119 kubelet[1576]: E1101 00:30:56.664019 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:56.664119 kubelet[1576]: I1101 00:30:56.664051 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:56.665613 kubelet[1576]: E1101 00:30:56.665581 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:56.665613 kubelet[1576]: I1101 00:30:56.665604 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:56.666931 kubelet[1576]: E1101 00:30:56.666901 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:56.851868 kubelet[1576]: I1101 00:30:56.851822 1576 apiserver.go:52] "Watching apiserver" Nov 1 00:30:56.859384 kubelet[1576]: I1101 00:30:56.859352 1576 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:30:56.891813 kubelet[1576]: I1101 00:30:56.891784 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:56.892146 kubelet[1576]: I1101 00:30:56.892126 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:56.892448 kubelet[1576]: I1101 00:30:56.892428 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:56.895325 
kubelet[1576]: E1101 00:30:56.895287 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:56.895456 kubelet[1576]: E1101 00:30:56.895432 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:56.895540 kubelet[1576]: E1101 00:30:56.895520 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:56.895751 kubelet[1576]: E1101 00:30:56.895735 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:56.895869 kubelet[1576]: E1101 00:30:56.895786 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:56.896026 kubelet[1576]: E1101 00:30:56.896011 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:57.893715 kubelet[1576]: I1101 00:30:57.893671 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:57.894183 kubelet[1576]: I1101 00:30:57.893780 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:57.898455 kubelet[1576]: E1101 00:30:57.898433 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:57.899306 kubelet[1576]: E1101 00:30:57.899283 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:58.095575 systemd[1]: Reloading. Nov 1 00:30:58.144430 /usr/lib/systemd/system-generators/torcx-generator[1880]: time="2025-11-01T00:30:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:30:58.144462 /usr/lib/systemd/system-generators/torcx-generator[1880]: time="2025-11-01T00:30:58Z" level=info msg="torcx already run" Nov 1 00:30:58.201639 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:30:58.201827 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:30:58.218633 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:30:58.306018 kubelet[1576]: I1101 00:30:58.305907 1576 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:30:58.306110 systemd[1]: Stopping kubelet.service... Nov 1 00:30:58.329941 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:30:58.330130 systemd[1]: Stopped kubelet.service. Nov 1 00:30:58.331743 systemd[1]: Starting kubelet.service... Nov 1 00:30:58.420226 systemd[1]: Started kubelet.service. 
Nov 1 00:30:58.452151 kubelet[1923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:30:58.452151 kubelet[1923]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:30:58.452151 kubelet[1923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:30:58.452464 kubelet[1923]: I1101 00:30:58.452182 1923 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:30:58.459197 kubelet[1923]: I1101 00:30:58.459146 1923 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:30:58.459197 kubelet[1923]: I1101 00:30:58.459173 1923 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:30:58.459372 kubelet[1923]: I1101 00:30:58.459356 1923 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:30:58.460530 kubelet[1923]: I1101 00:30:58.460514 1923 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:30:58.464068 kubelet[1923]: I1101 00:30:58.464047 1923 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:30:58.466724 kubelet[1923]: E1101 00:30:58.466701 1923 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:30:58.466829 kubelet[1923]: I1101 00:30:58.466815 1923 
server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:30:58.469226 kubelet[1923]: I1101 00:30:58.469188 1923 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:30:58.469516 kubelet[1923]: I1101 00:30:58.469495 1923 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:30:58.469752 kubelet[1923]: I1101 00:30:58.469608 1923 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManag
erReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:30:58.469887 kubelet[1923]: I1101 00:30:58.469874 1923 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:30:58.469963 kubelet[1923]: I1101 00:30:58.469953 1923 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:30:58.470067 kubelet[1923]: I1101 00:30:58.470056 1923 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:30:58.470264 kubelet[1923]: I1101 00:30:58.470253 1923 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:30:58.470439 kubelet[1923]: I1101 00:30:58.470413 1923 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:30:58.470488 kubelet[1923]: I1101 00:30:58.470465 1923 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:30:58.470488 kubelet[1923]: I1101 00:30:58.470479 1923 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:30:58.471955 kubelet[1923]: I1101 00:30:58.471935 1923 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:30:58.472935 kubelet[1923]: I1101 00:30:58.472895 1923 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:30:58.475382 kubelet[1923]: I1101 00:30:58.475362 1923 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:30:58.475457 kubelet[1923]: I1101 00:30:58.475412 1923 server.go:1289] "Started kubelet" Nov 1 00:30:58.476712 kubelet[1923]: I1101 00:30:58.476691 1923 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:30:58.478791 kubelet[1923]: I1101 00:30:58.477992 1923 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:30:58.478954 kubelet[1923]: I1101 
00:30:58.478925 1923 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:30:58.479218 kubelet[1923]: I1101 00:30:58.479183 1923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:30:58.479450 kubelet[1923]: I1101 00:30:58.479434 1923 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:30:58.480129 kubelet[1923]: I1101 00:30:58.480097 1923 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:30:58.481566 kubelet[1923]: I1101 00:30:58.480492 1923 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:30:58.481566 kubelet[1923]: E1101 00:30:58.481301 1923 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:30:58.482636 kubelet[1923]: I1101 00:30:58.482496 1923 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:30:58.483916 kubelet[1923]: I1101 00:30:58.483887 1923 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:30:58.488472 kubelet[1923]: I1101 00:30:58.488298 1923 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:30:58.488472 kubelet[1923]: I1101 00:30:58.488383 1923 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:30:58.493779 kubelet[1923]: I1101 00:30:58.491096 1923 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:30:58.517886 kubelet[1923]: I1101 00:30:58.517814 1923 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:30:58.518658 kubelet[1923]: I1101 00:30:58.518635 1923 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:30:58.518658 kubelet[1923]: I1101 00:30:58.518655 1923 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:30:58.518730 kubelet[1923]: I1101 00:30:58.518672 1923 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:30:58.518730 kubelet[1923]: I1101 00:30:58.518679 1923 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:30:58.518730 kubelet[1923]: E1101 00:30:58.518715 1923 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:30:58.543505 kubelet[1923]: I1101 00:30:58.543480 1923 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:30:58.543505 kubelet[1923]: I1101 00:30:58.543499 1923 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:30:58.543636 kubelet[1923]: I1101 00:30:58.543518 1923 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:30:58.543678 kubelet[1923]: I1101 00:30:58.543662 1923 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:30:58.543715 kubelet[1923]: I1101 00:30:58.543678 1923 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:30:58.543715 kubelet[1923]: I1101 00:30:58.543694 1923 policy_none.go:49] "None policy: Start" Nov 1 00:30:58.543715 kubelet[1923]: I1101 00:30:58.543703 1923 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:30:58.543715 kubelet[1923]: I1101 00:30:58.543712 1923 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:30:58.543809 kubelet[1923]: I1101 00:30:58.543791 1923 state_mem.go:75] "Updated machine memory state" Nov 1 00:30:58.546863 kubelet[1923]: E1101 00:30:58.546830 1923 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:30:58.547149 kubelet[1923]: I1101 00:30:58.547136 
1923 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:30:58.547268 kubelet[1923]: I1101 00:30:58.547240 1923 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:30:58.547718 kubelet[1923]: I1101 00:30:58.547629 1923 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:30:58.549119 kubelet[1923]: E1101 00:30:58.549100 1923 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:30:58.619424 kubelet[1923]: I1101 00:30:58.619398 1923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:58.619522 kubelet[1923]: I1101 00:30:58.619502 1923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:58.620645 kubelet[1923]: I1101 00:30:58.620629 1923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:58.624998 kubelet[1923]: E1101 00:30:58.624966 1923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:58.625368 kubelet[1923]: E1101 00:30:58.625339 1923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:58.651382 kubelet[1923]: I1101 00:30:58.651361 1923 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:30:58.657052 kubelet[1923]: I1101 00:30:58.657028 1923 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:30:58.657136 kubelet[1923]: I1101 00:30:58.657094 1923 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:30:58.683292 kubelet[1923]: I1101 00:30:58.683186 1923 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:58.683292 kubelet[1923]: I1101 00:30:58.683239 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7135a393b0007c3f44cd3f79fa652fa7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7135a393b0007c3f44cd3f79fa652fa7\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:58.683397 kubelet[1923]: I1101 00:30:58.683310 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:58.683397 kubelet[1923]: I1101 00:30:58.683330 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:58.683397 kubelet[1923]: I1101 00:30:58.683378 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:58.683397 kubelet[1923]: I1101 
00:30:58.683393 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7135a393b0007c3f44cd3f79fa652fa7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7135a393b0007c3f44cd3f79fa652fa7\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:58.683670 kubelet[1923]: I1101 00:30:58.683408 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7135a393b0007c3f44cd3f79fa652fa7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7135a393b0007c3f44cd3f79fa652fa7\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:58.683670 kubelet[1923]: I1101 00:30:58.683450 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:58.683670 kubelet[1923]: I1101 00:30:58.683463 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:58.924162 kubelet[1923]: E1101 00:30:58.924097 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:58.925201 kubelet[1923]: E1101 00:30:58.925178 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 
1 00:30:58.926235 kubelet[1923]: E1101 00:30:58.926214 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:59.093154 sudo[1961]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:30:59.093772 sudo[1961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:30:59.472049 kubelet[1923]: I1101 00:30:59.472020 1923 apiserver.go:52] "Watching apiserver" Nov 1 00:30:59.481438 kubelet[1923]: I1101 00:30:59.481416 1923 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:30:59.528207 sudo[1961]: pam_unix(sudo:session): session closed for user root Nov 1 00:30:59.530579 kubelet[1923]: I1101 00:30:59.530540 1923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:59.530690 kubelet[1923]: I1101 00:30:59.530612 1923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:59.531004 kubelet[1923]: I1101 00:30:59.530976 1923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:30:59.538403 kubelet[1923]: E1101 00:30:59.538368 1923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:30:59.538538 kubelet[1923]: E1101 00:30:59.538518 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:59.539025 kubelet[1923]: E1101 00:30:59.538988 1923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 
00:30:59.539115 kubelet[1923]: E1101 00:30:59.539101 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:59.539185 kubelet[1923]: E1101 00:30:59.539172 1923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:30:59.539264 kubelet[1923]: E1101 00:30:59.539252 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:30:59.558769 kubelet[1923]: I1101 00:30:59.558708 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.55869435 podStartE2EDuration="2.55869435s" podCreationTimestamp="2025-11-01 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:30:59.550900112 +0000 UTC m=+1.126273574" watchObservedRunningTime="2025-11-01 00:30:59.55869435 +0000 UTC m=+1.134067812" Nov 1 00:30:59.564533 kubelet[1923]: I1101 00:30:59.564495 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.564483848 podStartE2EDuration="1.564483848s" podCreationTimestamp="2025-11-01 00:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:30:59.558859491 +0000 UTC m=+1.134232953" watchObservedRunningTime="2025-11-01 00:30:59.564483848 +0000 UTC m=+1.139857310" Nov 1 00:30:59.576347 kubelet[1923]: I1101 00:30:59.576286 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=2.576272722 podStartE2EDuration="2.576272722s" podCreationTimestamp="2025-11-01 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:30:59.564880115 +0000 UTC m=+1.140253577" watchObservedRunningTime="2025-11-01 00:30:59.576272722 +0000 UTC m=+1.151646184" Nov 1 00:31:00.532141 kubelet[1923]: E1101 00:31:00.532088 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:00.532690 kubelet[1923]: E1101 00:31:00.532660 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:00.532952 kubelet[1923]: E1101 00:31:00.532919 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:01.103760 sudo[1316]: pam_unix(sudo:session): session closed for user root Nov 1 00:31:01.105402 sshd[1313]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:01.108152 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:31:01.108880 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:60516.service: Deactivated successfully. Nov 1 00:31:01.109635 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:31:01.109788 systemd[1]: session-5.scope: Consumed 6.226s CPU time. Nov 1 00:31:01.110712 systemd-logind[1203]: Removed session 5. 
Nov 1 00:31:01.533612 kubelet[1923]: E1101 00:31:01.533509 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:04.338792 kubelet[1923]: E1101 00:31:04.338749 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:04.537358 kubelet[1923]: E1101 00:31:04.537323 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:05.195487 kubelet[1923]: I1101 00:31:05.195454 1923 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:31:05.195897 env[1216]: time="2025-11-01T00:31:05.195796887Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:31:05.196169 kubelet[1923]: I1101 00:31:05.196037 1923 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:31:05.730156 systemd[1]: Created slice kubepods-besteffort-podfdd269f7_18eb_4b25_96de_cae266e588a4.slice. Nov 1 00:31:05.742141 systemd[1]: Created slice kubepods-burstable-pod5cb00ccf_4d3a_44f1_a46b_5ce1ed58192b.slice. 
Nov 1 00:31:05.835495 kubelet[1923]: I1101 00:31:05.835449 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-bpf-maps\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.835883 kubelet[1923]: I1101 00:31:05.835509 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hostproc\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.835883 kubelet[1923]: I1101 00:31:05.835526 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-lib-modules\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.835883 kubelet[1923]: I1101 00:31:05.835542 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-xtables-lock\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.835883 kubelet[1923]: I1101 00:31:05.835592 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-kernel\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.835883 kubelet[1923]: I1101 00:31:05.835613 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-run\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.835883 kubelet[1923]: I1101 00:31:05.835657 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-cgroup\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836045 kubelet[1923]: I1101 00:31:05.835676 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdd269f7-18eb-4b25-96de-cae266e588a4-xtables-lock\") pod \"kube-proxy-fhfnt\" (UID: \"fdd269f7-18eb-4b25-96de-cae266e588a4\") " pod="kube-system/kube-proxy-fhfnt" Nov 1 00:31:05.836045 kubelet[1923]: I1101 00:31:05.835692 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6dzj\" (UniqueName: \"kubernetes.io/projected/fdd269f7-18eb-4b25-96de-cae266e588a4-kube-api-access-v6dzj\") pod \"kube-proxy-fhfnt\" (UID: \"fdd269f7-18eb-4b25-96de-cae266e588a4\") " pod="kube-system/kube-proxy-fhfnt" Nov 1 00:31:05.836045 kubelet[1923]: I1101 00:31:05.835729 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-etc-cni-netd\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836045 kubelet[1923]: I1101 00:31:05.835759 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-clustermesh-secrets\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836045 kubelet[1923]: I1101 00:31:05.835785 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-net\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836183 kubelet[1923]: I1101 00:31:05.835809 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cni-path\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836183 kubelet[1923]: I1101 00:31:05.835826 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-config-path\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836183 kubelet[1923]: I1101 00:31:05.835869 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hubble-tls\") pod \"cilium-vvz4h\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836183 kubelet[1923]: I1101 00:31:05.835883 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fdn\" (UniqueName: \"kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-kube-api-access-h9fdn\") pod \"cilium-vvz4h\" (UID: 
\"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") " pod="kube-system/cilium-vvz4h" Nov 1 00:31:05.836183 kubelet[1923]: I1101 00:31:05.835898 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdd269f7-18eb-4b25-96de-cae266e588a4-kube-proxy\") pod \"kube-proxy-fhfnt\" (UID: \"fdd269f7-18eb-4b25-96de-cae266e588a4\") " pod="kube-system/kube-proxy-fhfnt" Nov 1 00:31:05.836183 kubelet[1923]: I1101 00:31:05.835945 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdd269f7-18eb-4b25-96de-cae266e588a4-lib-modules\") pod \"kube-proxy-fhfnt\" (UID: \"fdd269f7-18eb-4b25-96de-cae266e588a4\") " pod="kube-system/kube-proxy-fhfnt" Nov 1 00:31:05.937604 kubelet[1923]: I1101 00:31:05.937568 1923 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:31:05.946955 kubelet[1923]: E1101 00:31:05.946917 1923 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:31:05.947067 kubelet[1923]: E1101 00:31:05.946949 1923 projected.go:194] Error preparing data for projected volume kube-api-access-v6dzj for pod kube-system/kube-proxy-fhfnt: configmap "kube-root-ca.crt" not found Nov 1 00:31:05.947067 kubelet[1923]: E1101 00:31:05.947020 1923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdd269f7-18eb-4b25-96de-cae266e588a4-kube-api-access-v6dzj podName:fdd269f7-18eb-4b25-96de-cae266e588a4 nodeName:}" failed. No retries permitted until 2025-11-01 00:31:06.447000185 +0000 UTC m=+8.022373647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v6dzj" (UniqueName: "kubernetes.io/projected/fdd269f7-18eb-4b25-96de-cae266e588a4-kube-api-access-v6dzj") pod "kube-proxy-fhfnt" (UID: "fdd269f7-18eb-4b25-96de-cae266e588a4") : configmap "kube-root-ca.crt" not found Nov 1 00:31:05.947067 kubelet[1923]: E1101 00:31:05.946917 1923 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:31:05.947067 kubelet[1923]: E1101 00:31:05.947057 1923 projected.go:194] Error preparing data for projected volume kube-api-access-h9fdn for pod kube-system/cilium-vvz4h: configmap "kube-root-ca.crt" not found Nov 1 00:31:05.947193 kubelet[1923]: E1101 00:31:05.947081 1923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-kube-api-access-h9fdn podName:5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b nodeName:}" failed. No retries permitted until 2025-11-01 00:31:06.447074481 +0000 UTC m=+8.022447943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h9fdn" (UniqueName: "kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-kube-api-access-h9fdn") pod "cilium-vvz4h" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b") : configmap "kube-root-ca.crt" not found Nov 1 00:31:06.386563 systemd[1]: Created slice kubepods-besteffort-pod379983f1_7319_4603_8e83_fa54f160f72c.slice. 
Nov 1 00:31:06.441177 kubelet[1923]: I1101 00:31:06.441100 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6mpm\" (UniqueName: \"kubernetes.io/projected/379983f1-7319-4603-8e83-fa54f160f72c-kube-api-access-j6mpm\") pod \"cilium-operator-6c4d7847fc-klbtb\" (UID: \"379983f1-7319-4603-8e83-fa54f160f72c\") " pod="kube-system/cilium-operator-6c4d7847fc-klbtb" Nov 1 00:31:06.441177 kubelet[1923]: I1101 00:31:06.441176 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379983f1-7319-4603-8e83-fa54f160f72c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-klbtb\" (UID: \"379983f1-7319-4603-8e83-fa54f160f72c\") " pod="kube-system/cilium-operator-6c4d7847fc-klbtb" Nov 1 00:31:06.637815 kubelet[1923]: E1101 00:31:06.637675 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:06.638410 env[1216]: time="2025-11-01T00:31:06.638255010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhfnt,Uid:fdd269f7-18eb-4b25-96de-cae266e588a4,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:06.644515 kubelet[1923]: E1101 00:31:06.644476 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:06.645422 env[1216]: time="2025-11-01T00:31:06.645199411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvz4h,Uid:5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:06.652274 env[1216]: time="2025-11-01T00:31:06.652208305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:06.652370 env[1216]: time="2025-11-01T00:31:06.652286961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:06.652370 env[1216]: time="2025-11-01T00:31:06.652312446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:06.652576 env[1216]: time="2025-11-01T00:31:06.652478159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8220137a1af72bfdec79bf0c778a32825b104d802b4e596a5163cab248c7f1 pid=2023 runtime=io.containerd.runc.v2 Nov 1 00:31:06.663330 systemd[1]: Started cri-containerd-6d8220137a1af72bfdec79bf0c778a32825b104d802b4e596a5163cab248c7f1.scope. Nov 1 00:31:06.664041 env[1216]: time="2025-11-01T00:31:06.662520305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:06.664041 env[1216]: time="2025-11-01T00:31:06.662578957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:06.664041 env[1216]: time="2025-11-01T00:31:06.662589599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:06.664041 env[1216]: time="2025-11-01T00:31:06.662733388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb pid=2049 runtime=io.containerd.runc.v2 Nov 1 00:31:06.675665 systemd[1]: Started cri-containerd-a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb.scope. 
Nov 1 00:31:06.689514 kubelet[1923]: E1101 00:31:06.689479 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:06.690844 env[1216]: time="2025-11-01T00:31:06.690808891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-klbtb,Uid:379983f1-7319-4603-8e83-fa54f160f72c,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:06.699714 env[1216]: time="2025-11-01T00:31:06.699640993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhfnt,Uid:fdd269f7-18eb-4b25-96de-cae266e588a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d8220137a1af72bfdec79bf0c778a32825b104d802b4e596a5163cab248c7f1\"" Nov 1 00:31:06.700453 kubelet[1923]: E1101 00:31:06.700428 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:06.704212 env[1216]: time="2025-11-01T00:31:06.704164825Z" level=info msg="CreateContainer within sandbox \"6d8220137a1af72bfdec79bf0c778a32825b104d802b4e596a5163cab248c7f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:31:06.711371 env[1216]: time="2025-11-01T00:31:06.711333791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvz4h,Uid:5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\"" Nov 1 00:31:06.712481 kubelet[1923]: E1101 00:31:06.711950 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:06.713380 env[1216]: time="2025-11-01T00:31:06.713339596Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:31:06.716380 env[1216]: time="2025-11-01T00:31:06.716290031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:06.716380 env[1216]: time="2025-11-01T00:31:06.716335400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:06.716380 env[1216]: time="2025-11-01T00:31:06.716349083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:06.716686 env[1216]: time="2025-11-01T00:31:06.716632620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c pid=2107 runtime=io.containerd.runc.v2 Nov 1 00:31:06.718997 env[1216]: time="2025-11-01T00:31:06.718960810Z" level=info msg="CreateContainer within sandbox \"6d8220137a1af72bfdec79bf0c778a32825b104d802b4e596a5163cab248c7f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f83db03f29361d81e1c93a7fc3c9da252d477e050882a9c0d1288d1447283741\"" Nov 1 00:31:06.719614 env[1216]: time="2025-11-01T00:31:06.719557490Z" level=info msg="StartContainer for \"f83db03f29361d81e1c93a7fc3c9da252d477e050882a9c0d1288d1447283741\"" Nov 1 00:31:06.728346 systemd[1]: Started cri-containerd-6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c.scope. Nov 1 00:31:06.738162 systemd[1]: Started cri-containerd-f83db03f29361d81e1c93a7fc3c9da252d477e050882a9c0d1288d1447283741.scope. 
Nov 1 00:31:06.767366 env[1216]: time="2025-11-01T00:31:06.767327526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-klbtb,Uid:379983f1-7319-4603-8e83-fa54f160f72c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c\"" Nov 1 00:31:06.768507 kubelet[1923]: E1101 00:31:06.768019 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:06.773787 env[1216]: time="2025-11-01T00:31:06.773748821Z" level=info msg="StartContainer for \"f83db03f29361d81e1c93a7fc3c9da252d477e050882a9c0d1288d1447283741\" returns successfully" Nov 1 00:31:06.988164 kubelet[1923]: E1101 00:31:06.987907 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:07.547655 kubelet[1923]: E1101 00:31:07.546910 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:07.547655 kubelet[1923]: E1101 00:31:07.547461 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:10.001011 kubelet[1923]: E1101 00:31:10.000981 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:10.013506 kubelet[1923]: I1101 00:31:10.013389 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhfnt" podStartSLOduration=5.013376649 podStartE2EDuration="5.013376649s" podCreationTimestamp="2025-11-01 00:31:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:07.565361421 +0000 UTC m=+9.140734883" watchObservedRunningTime="2025-11-01 00:31:10.013376649 +0000 UTC m=+11.588750111" Nov 1 00:31:11.407851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090218211.mount: Deactivated successfully. Nov 1 00:31:13.626066 env[1216]: time="2025-11-01T00:31:13.626002673Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:31:13.628224 env[1216]: time="2025-11-01T00:31:13.628166333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:31:13.629881 env[1216]: time="2025-11-01T00:31:13.629853846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:31:13.631206 env[1216]: time="2025-11-01T00:31:13.631167749Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 1 00:31:13.634182 env[1216]: time="2025-11-01T00:31:13.634121998Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:31:13.639077 env[1216]: time="2025-11-01T00:31:13.639044360Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:31:13.648720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541433213.mount: Deactivated successfully. Nov 1 00:31:13.649560 env[1216]: time="2025-11-01T00:31:13.649352109Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\"" Nov 1 00:31:13.650576 env[1216]: time="2025-11-01T00:31:13.649846698Z" level=info msg="StartContainer for \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\"" Nov 1 00:31:13.670499 systemd[1]: Started cri-containerd-644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db.scope. Nov 1 00:31:13.742428 env[1216]: time="2025-11-01T00:31:13.742360601Z" level=info msg="StartContainer for \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\" returns successfully" Nov 1 00:31:13.750856 systemd[1]: cri-containerd-644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db.scope: Deactivated successfully. 
Nov 1 00:31:13.801254 env[1216]: time="2025-11-01T00:31:13.801204757Z" level=info msg="shim disconnected" id=644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db Nov 1 00:31:13.801568 env[1216]: time="2025-11-01T00:31:13.801523762Z" level=warning msg="cleaning up after shim disconnected" id=644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db namespace=k8s.io Nov 1 00:31:13.801690 env[1216]: time="2025-11-01T00:31:13.801631257Z" level=info msg="cleaning up dead shim" Nov 1 00:31:13.808602 env[1216]: time="2025-11-01T00:31:13.808569178Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:31:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2363 runtime=io.containerd.runc.v2\n" Nov 1 00:31:13.982668 update_engine[1204]: I1101 00:31:13.982481 1204 update_attempter.cc:509] Updating boot flags... Nov 1 00:31:14.559502 kubelet[1923]: E1101 00:31:14.559467 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:14.581653 env[1216]: time="2025-11-01T00:31:14.581601275Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:31:14.608663 env[1216]: time="2025-11-01T00:31:14.608613313Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\"" Nov 1 00:31:14.609928 env[1216]: time="2025-11-01T00:31:14.609898842Z" level=info msg="StartContainer for \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\"" Nov 1 00:31:14.624312 systemd[1]: Started cri-containerd-042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645.scope. 
Nov 1 00:31:14.647470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db-rootfs.mount: Deactivated successfully. Nov 1 00:31:14.654761 env[1216]: time="2025-11-01T00:31:14.654708704Z" level=info msg="StartContainer for \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\" returns successfully" Nov 1 00:31:14.664332 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:31:14.664616 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:31:14.664794 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:31:14.666434 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:31:14.668364 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:31:14.672525 systemd[1]: cri-containerd-042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645.scope: Deactivated successfully. Nov 1 00:31:14.674793 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:31:14.688312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645-rootfs.mount: Deactivated successfully. 
Nov 1 00:31:14.695758 env[1216]: time="2025-11-01T00:31:14.695716145Z" level=info msg="shim disconnected" id=042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645 Nov 1 00:31:14.695758 env[1216]: time="2025-11-01T00:31:14.695758550Z" level=warning msg="cleaning up after shim disconnected" id=042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645 namespace=k8s.io Nov 1 00:31:14.695928 env[1216]: time="2025-11-01T00:31:14.695769072Z" level=info msg="cleaning up dead shim" Nov 1 00:31:14.702227 env[1216]: time="2025-11-01T00:31:14.702189797Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:31:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2444 runtime=io.containerd.runc.v2\n" Nov 1 00:31:15.519611 env[1216]: time="2025-11-01T00:31:15.519564414Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:31:15.520895 env[1216]: time="2025-11-01T00:31:15.520864057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:31:15.522409 env[1216]: time="2025-11-01T00:31:15.522380206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:31:15.522979 env[1216]: time="2025-11-01T00:31:15.522949798Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 1 00:31:15.527648 env[1216]: 
time="2025-11-01T00:31:15.527614742Z" level=info msg="CreateContainer within sandbox \"6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:31:15.535867 env[1216]: time="2025-11-01T00:31:15.535824610Z" level=info msg="CreateContainer within sandbox \"6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\"" Nov 1 00:31:15.536250 env[1216]: time="2025-11-01T00:31:15.536225940Z" level=info msg="StartContainer for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\"" Nov 1 00:31:15.556842 systemd[1]: Started cri-containerd-1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758.scope. Nov 1 00:31:15.563080 kubelet[1923]: E1101 00:31:15.562829 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:15.623352 env[1216]: time="2025-11-01T00:31:15.623302365Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:31:15.623780 env[1216]: time="2025-11-01T00:31:15.623747621Z" level=info msg="StartContainer for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" returns successfully" Nov 1 00:31:15.635970 env[1216]: time="2025-11-01T00:31:15.635923545Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\"" Nov 1 00:31:15.636658 env[1216]: time="2025-11-01T00:31:15.636630674Z" level=info msg="StartContainer for 
\"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\"" Nov 1 00:31:15.647914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609166277.mount: Deactivated successfully. Nov 1 00:31:15.654391 systemd[1]: Started cri-containerd-fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a.scope. Nov 1 00:31:15.691281 systemd[1]: cri-containerd-fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a.scope: Deactivated successfully. Nov 1 00:31:15.694576 env[1216]: time="2025-11-01T00:31:15.694497401Z" level=info msg="StartContainer for \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\" returns successfully" Nov 1 00:31:15.735875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a-rootfs.mount: Deactivated successfully. Nov 1 00:31:15.740942 env[1216]: time="2025-11-01T00:31:15.740898131Z" level=info msg="shim disconnected" id=fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a Nov 1 00:31:15.741207 env[1216]: time="2025-11-01T00:31:15.741177926Z" level=warning msg="cleaning up after shim disconnected" id=fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a namespace=k8s.io Nov 1 00:31:15.741288 env[1216]: time="2025-11-01T00:31:15.741273378Z" level=info msg="cleaning up dead shim" Nov 1 00:31:15.758895 env[1216]: time="2025-11-01T00:31:15.758855260Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:31:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2539 runtime=io.containerd.runc.v2\n" Nov 1 00:31:16.581003 kubelet[1923]: E1101 00:31:16.580970 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:16.582924 kubelet[1923]: E1101 00:31:16.582904 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:16.587523 env[1216]: time="2025-11-01T00:31:16.587482786Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:31:16.593318 kubelet[1923]: I1101 00:31:16.593270 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-klbtb" podStartSLOduration=1.8380831720000002 podStartE2EDuration="10.593256354s" podCreationTimestamp="2025-11-01 00:31:06 +0000 UTC" firstStartedPulling="2025-11-01 00:31:06.768823147 +0000 UTC m=+8.344196609" lastFinishedPulling="2025-11-01 00:31:15.523996329 +0000 UTC m=+17.099369791" observedRunningTime="2025-11-01 00:31:16.592882069 +0000 UTC m=+18.168255531" watchObservedRunningTime="2025-11-01 00:31:16.593256354 +0000 UTC m=+18.168629816" Nov 1 00:31:16.604332 env[1216]: time="2025-11-01T00:31:16.604287388Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\"" Nov 1 00:31:16.605069 env[1216]: time="2025-11-01T00:31:16.605038558Z" level=info msg="StartContainer for \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\"" Nov 1 00:31:16.620575 systemd[1]: Started cri-containerd-4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5.scope. Nov 1 00:31:16.647254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011139967.mount: Deactivated successfully. 
Nov 1 00:31:16.657311 env[1216]: time="2025-11-01T00:31:16.657264261Z" level=info msg="StartContainer for \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\" returns successfully" Nov 1 00:31:16.659610 systemd[1]: cri-containerd-4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5.scope: Deactivated successfully. Nov 1 00:31:16.676902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5-rootfs.mount: Deactivated successfully. Nov 1 00:31:16.682633 env[1216]: time="2025-11-01T00:31:16.682584998Z" level=info msg="shim disconnected" id=4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5 Nov 1 00:31:16.682792 env[1216]: time="2025-11-01T00:31:16.682633004Z" level=warning msg="cleaning up after shim disconnected" id=4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5 namespace=k8s.io Nov 1 00:31:16.682792 env[1216]: time="2025-11-01T00:31:16.682645045Z" level=info msg="cleaning up dead shim" Nov 1 00:31:16.688444 env[1216]: time="2025-11-01T00:31:16.688384409Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:31:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2595 runtime=io.containerd.runc.v2\n" Nov 1 00:31:17.587030 kubelet[1923]: E1101 00:31:17.586993 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:17.587459 kubelet[1923]: E1101 00:31:17.587435 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:17.591309 env[1216]: time="2025-11-01T00:31:17.591269554Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 
00:31:17.605532 env[1216]: time="2025-11-01T00:31:17.605487848Z" level=info msg="CreateContainer within sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\"" Nov 1 00:31:17.606122 env[1216]: time="2025-11-01T00:31:17.606080115Z" level=info msg="StartContainer for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\"" Nov 1 00:31:17.621371 systemd[1]: Started cri-containerd-416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81.scope. Nov 1 00:31:17.654567 env[1216]: time="2025-11-01T00:31:17.651815544Z" level=info msg="StartContainer for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" returns successfully" Nov 1 00:31:17.732887 kubelet[1923]: I1101 00:31:17.732852 1923 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:31:17.774686 systemd[1]: Created slice kubepods-burstable-pod8bb293f9_7c66_4fb5_9e1e_1f2c3915dffe.slice. Nov 1 00:31:17.780603 systemd[1]: Created slice kubepods-burstable-pod1994bc24_0f1d_4908_8b69_1a4de45b5fd3.slice. Nov 1 00:31:17.809576 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 00:31:17.836957 kubelet[1923]: I1101 00:31:17.836901 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bb293f9-7c66-4fb5-9e1e-1f2c3915dffe-config-volume\") pod \"coredns-674b8bbfcf-hq6d9\" (UID: \"8bb293f9-7c66-4fb5-9e1e-1f2c3915dffe\") " pod="kube-system/coredns-674b8bbfcf-hq6d9" Nov 1 00:31:17.836957 kubelet[1923]: I1101 00:31:17.836960 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9hpj\" (UniqueName: \"kubernetes.io/projected/8bb293f9-7c66-4fb5-9e1e-1f2c3915dffe-kube-api-access-l9hpj\") pod \"coredns-674b8bbfcf-hq6d9\" (UID: \"8bb293f9-7c66-4fb5-9e1e-1f2c3915dffe\") " pod="kube-system/coredns-674b8bbfcf-hq6d9" Nov 1 00:31:17.837170 kubelet[1923]: I1101 00:31:17.836984 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1994bc24-0f1d-4908-8b69-1a4de45b5fd3-config-volume\") pod \"coredns-674b8bbfcf-fvzs9\" (UID: \"1994bc24-0f1d-4908-8b69-1a4de45b5fd3\") " pod="kube-system/coredns-674b8bbfcf-fvzs9" Nov 1 00:31:17.837170 kubelet[1923]: I1101 00:31:17.837002 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc62n\" (UniqueName: \"kubernetes.io/projected/1994bc24-0f1d-4908-8b69-1a4de45b5fd3-kube-api-access-tc62n\") pod \"coredns-674b8bbfcf-fvzs9\" (UID: \"1994bc24-0f1d-4908-8b69-1a4de45b5fd3\") " pod="kube-system/coredns-674b8bbfcf-fvzs9" Nov 1 00:31:18.035589 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 00:31:18.078061 kubelet[1923]: E1101 00:31:18.078026 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:18.078726 env[1216]: time="2025-11-01T00:31:18.078687889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hq6d9,Uid:8bb293f9-7c66-4fb5-9e1e-1f2c3915dffe,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:18.084187 kubelet[1923]: E1101 00:31:18.084155 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:18.084847 env[1216]: time="2025-11-01T00:31:18.084790069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvzs9,Uid:1994bc24-0f1d-4908-8b69-1a4de45b5fd3,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:18.591136 kubelet[1923]: E1101 00:31:18.591105 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:18.606567 kubelet[1923]: I1101 00:31:18.606486 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvz4h" podStartSLOduration=6.6856696939999996 podStartE2EDuration="13.60647172s" podCreationTimestamp="2025-11-01 00:31:05 +0000 UTC" firstStartedPulling="2025-11-01 00:31:06.713003528 +0000 UTC m=+8.288376990" lastFinishedPulling="2025-11-01 00:31:13.633805554 +0000 UTC m=+15.209179016" observedRunningTime="2025-11-01 00:31:18.605347199 +0000 UTC m=+20.180720661" watchObservedRunningTime="2025-11-01 00:31:18.60647172 +0000 UTC m=+20.181845182" Nov 1 00:31:19.592651 kubelet[1923]: E1101 00:31:19.592619 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 1 00:31:19.651638 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:31:19.650985 systemd-networkd[1042]: cilium_host: Link UP Nov 1 00:31:19.651112 systemd-networkd[1042]: cilium_net: Link UP Nov 1 00:31:19.651115 systemd-networkd[1042]: cilium_net: Gained carrier Nov 1 00:31:19.651232 systemd-networkd[1042]: cilium_host: Gained carrier Nov 1 00:31:19.651620 systemd-networkd[1042]: cilium_host: Gained IPv6LL Nov 1 00:31:19.727683 systemd-networkd[1042]: cilium_vxlan: Link UP Nov 1 00:31:19.727689 systemd-networkd[1042]: cilium_vxlan: Gained carrier Nov 1 00:31:19.975578 kernel: NET: Registered PF_ALG protocol family Nov 1 00:31:20.213693 systemd-networkd[1042]: cilium_net: Gained IPv6LL Nov 1 00:31:20.536497 systemd-networkd[1042]: lxc_health: Link UP Nov 1 00:31:20.545325 systemd-networkd[1042]: lxc_health: Gained carrier Nov 1 00:31:20.545739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:31:20.594078 kubelet[1923]: E1101 00:31:20.593692 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:21.109757 systemd-networkd[1042]: cilium_vxlan: Gained IPv6LL Nov 1 00:31:21.119148 systemd-networkd[1042]: lxc595d29f7e063: Link UP Nov 1 00:31:21.127825 kernel: eth0: renamed from tmp0a71a Nov 1 00:31:21.136328 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:31:21.136425 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc595d29f7e063: link becomes ready Nov 1 00:31:21.136615 systemd-networkd[1042]: lxc595d29f7e063: Gained carrier Nov 1 00:31:21.136753 systemd-networkd[1042]: lxc3d8d346da01b: Link UP Nov 1 00:31:21.144588 kernel: eth0: renamed from tmp7af3c Nov 1 00:31:21.151571 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3d8d346da01b: link becomes ready Nov 1 00:31:21.151572 systemd-networkd[1042]: lxc3d8d346da01b: Gained carrier Nov 1 
00:31:21.594908 kubelet[1923]: E1101 00:31:21.594816 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:22.069716 systemd-networkd[1042]: lxc_health: Gained IPv6LL Nov 1 00:31:22.197667 systemd-networkd[1042]: lxc3d8d346da01b: Gained IPv6LL Nov 1 00:31:22.595868 kubelet[1923]: E1101 00:31:22.595821 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:23.029688 systemd-networkd[1042]: lxc595d29f7e063: Gained IPv6LL Nov 1 00:31:23.397540 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:52500.service. Nov 1 00:31:23.434387 sshd[3153]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:31:23.435791 sshd[3153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:31:23.439940 systemd[1]: Started session-6.scope. Nov 1 00:31:23.440265 systemd-logind[1203]: New session 6 of user core. Nov 1 00:31:23.569719 sshd[3153]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:23.572104 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:52500.service: Deactivated successfully. Nov 1 00:31:23.572839 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:31:23.573331 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:31:23.573967 systemd-logind[1203]: Removed session 6. Nov 1 00:31:23.597433 kubelet[1923]: E1101 00:31:23.597390 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:24.605634 env[1216]: time="2025-11-01T00:31:24.605504391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:24.605937 env[1216]: time="2025-11-01T00:31:24.605736610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:24.605937 env[1216]: time="2025-11-01T00:31:24.605768092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:24.605937 env[1216]: time="2025-11-01T00:31:24.605778173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:24.606013 env[1216]: time="2025-11-01T00:31:24.605889062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:24.606013 env[1216]: time="2025-11-01T00:31:24.605909904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:24.606080 env[1216]: time="2025-11-01T00:31:24.606018993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a71a8bc99049ba01c9aa8cfc1c396092bccca53c44c94fb62b120fd15b2b586 pid=3181 runtime=io.containerd.runc.v2 Nov 1 00:31:24.606198 env[1216]: time="2025-11-01T00:31:24.606103720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7af3c491cc36c7cfbbfd4ca96c02fd69efe9b35d2f5a8089e1a00571e78f3392 pid=3192 runtime=io.containerd.runc.v2 Nov 1 00:31:24.624991 systemd[1]: Started cri-containerd-0a71a8bc99049ba01c9aa8cfc1c396092bccca53c44c94fb62b120fd15b2b586.scope. Nov 1 00:31:24.625975 systemd[1]: Started cri-containerd-7af3c491cc36c7cfbbfd4ca96c02fd69efe9b35d2f5a8089e1a00571e78f3392.scope. 
Nov 1 00:31:24.644108 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:31:24.645130 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:31:24.664694 env[1216]: time="2025-11-01T00:31:24.664654387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvzs9,Uid:1994bc24-0f1d-4908-8b69-1a4de45b5fd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7af3c491cc36c7cfbbfd4ca96c02fd69efe9b35d2f5a8089e1a00571e78f3392\"" Nov 1 00:31:24.669582 kubelet[1923]: E1101 00:31:24.665351 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:24.669841 env[1216]: time="2025-11-01T00:31:24.666316564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hq6d9,Uid:8bb293f9-7c66-4fb5-9e1e-1f2c3915dffe,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a71a8bc99049ba01c9aa8cfc1c396092bccca53c44c94fb62b120fd15b2b586\"" Nov 1 00:31:24.671242 kubelet[1923]: E1101 00:31:24.671218 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:24.674019 env[1216]: time="2025-11-01T00:31:24.673984596Z" level=info msg="CreateContainer within sandbox \"7af3c491cc36c7cfbbfd4ca96c02fd69efe9b35d2f5a8089e1a00571e78f3392\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:31:24.678068 env[1216]: time="2025-11-01T00:31:24.678018729Z" level=info msg="CreateContainer within sandbox \"0a71a8bc99049ba01c9aa8cfc1c396092bccca53c44c94fb62b120fd15b2b586\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:31:24.688812 env[1216]: time="2025-11-01T00:31:24.688768775Z" level=info msg="CreateContainer within 
sandbox \"7af3c491cc36c7cfbbfd4ca96c02fd69efe9b35d2f5a8089e1a00571e78f3392\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35578e5a48383cc60a4948b9222e8156c99eff51decd3165272666a443750a99\"" Nov 1 00:31:24.689355 env[1216]: time="2025-11-01T00:31:24.689325181Z" level=info msg="StartContainer for \"35578e5a48383cc60a4948b9222e8156c99eff51decd3165272666a443750a99\"" Nov 1 00:31:24.692529 env[1216]: time="2025-11-01T00:31:24.692483201Z" level=info msg="CreateContainer within sandbox \"0a71a8bc99049ba01c9aa8cfc1c396092bccca53c44c94fb62b120fd15b2b586\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1253917e36aeead1426989390d8970947b655ec76931363c0c29d2083ae1faa8\"" Nov 1 00:31:24.694182 env[1216]: time="2025-11-01T00:31:24.694046450Z" level=info msg="StartContainer for \"1253917e36aeead1426989390d8970947b655ec76931363c0c29d2083ae1faa8\"" Nov 1 00:31:24.705635 systemd[1]: Started cri-containerd-35578e5a48383cc60a4948b9222e8156c99eff51decd3165272666a443750a99.scope. Nov 1 00:31:24.711860 systemd[1]: Started cri-containerd-1253917e36aeead1426989390d8970947b655ec76931363c0c29d2083ae1faa8.scope. 
Nov 1 00:31:24.739632 env[1216]: time="2025-11-01T00:31:24.739539040Z" level=info msg="StartContainer for \"35578e5a48383cc60a4948b9222e8156c99eff51decd3165272666a443750a99\" returns successfully" Nov 1 00:31:24.744382 env[1216]: time="2025-11-01T00:31:24.744338436Z" level=info msg="StartContainer for \"1253917e36aeead1426989390d8970947b655ec76931363c0c29d2083ae1faa8\" returns successfully" Nov 1 00:31:25.602492 kubelet[1923]: E1101 00:31:25.602130 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:25.604128 kubelet[1923]: E1101 00:31:25.604006 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:25.610067 systemd[1]: run-containerd-runc-k8s.io-7af3c491cc36c7cfbbfd4ca96c02fd69efe9b35d2f5a8089e1a00571e78f3392-runc.q8lu9l.mount: Deactivated successfully. 
Nov 1 00:31:25.617175 kubelet[1923]: I1101 00:31:25.617126 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fvzs9" podStartSLOduration=19.617114734 podStartE2EDuration="19.617114734s" podCreationTimestamp="2025-11-01 00:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:25.616413958 +0000 UTC m=+27.191787420" watchObservedRunningTime="2025-11-01 00:31:25.617114734 +0000 UTC m=+27.192488196" Nov 1 00:31:25.636596 kubelet[1923]: I1101 00:31:25.636492 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hq6d9" podStartSLOduration=19.636467624 podStartE2EDuration="19.636467624s" podCreationTimestamp="2025-11-01 00:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:25.635945422 +0000 UTC m=+27.211318884" watchObservedRunningTime="2025-11-01 00:31:25.636467624 +0000 UTC m=+27.211841086" Nov 1 00:31:26.606174 kubelet[1923]: E1101 00:31:26.606145 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:26.606502 kubelet[1923]: E1101 00:31:26.606236 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:27.607773 kubelet[1923]: E1101 00:31:27.607744 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:31:28.574779 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:52506.service. 
Nov 1 00:31:28.611533 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 52506 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:31:28.612952 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:31:28.616227 systemd-logind[1203]: New session 7 of user core. Nov 1 00:31:28.617094 systemd[1]: Started session-7.scope. Nov 1 00:31:28.725680 sshd[3339]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:28.727882 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:31:28.728401 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:31:28.728540 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:52506.service: Deactivated successfully. Nov 1 00:31:28.729440 systemd-logind[1203]: Removed session 7. Nov 1 00:31:33.731111 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:38880.service. Nov 1 00:31:33.764888 sshd[3353]: Accepted publickey for core from 10.0.0.1 port 38880 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:31:33.766130 sshd[3353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:31:33.769332 systemd-logind[1203]: New session 8 of user core. Nov 1 00:31:33.770188 systemd[1]: Started session-8.scope. Nov 1 00:31:33.879904 sshd[3353]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:33.882484 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:31:33.882746 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:38880.service: Deactivated successfully. Nov 1 00:31:33.883433 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:31:33.884030 systemd-logind[1203]: Removed session 8. Nov 1 00:31:38.884948 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:38896.service. 
Nov 1 00:31:38.924925 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 38896 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:38.926241 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:38.929912 systemd-logind[1203]: New session 9 of user core.
Nov 1 00:31:38.930860 systemd[1]: Started session-9.scope.
Nov 1 00:31:39.054687 sshd[3371]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:39.057775 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:38908.service.
Nov 1 00:31:39.060980 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:38896.service: Deactivated successfully.
Nov 1 00:31:39.061653 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 00:31:39.063126 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit.
Nov 1 00:31:39.063831 systemd-logind[1203]: Removed session 9.
Nov 1 00:31:39.094783 sshd[3384]: Accepted publickey for core from 10.0.0.1 port 38908 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:39.095161 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:39.099495 systemd-logind[1203]: New session 10 of user core.
Nov 1 00:31:39.100091 systemd[1]: Started session-10.scope.
Nov 1 00:31:39.290822 sshd[3384]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:39.293929 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:46140.service.
Nov 1 00:31:39.303595 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:38908.service: Deactivated successfully.
Nov 1 00:31:39.303646 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit.
Nov 1 00:31:39.304335 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 00:31:39.305945 systemd-logind[1203]: Removed session 10.
Nov 1 00:31:39.335492 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 46140 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:39.337065 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:39.340223 systemd-logind[1203]: New session 11 of user core.
Nov 1 00:31:39.341120 systemd[1]: Started session-11.scope.
Nov 1 00:31:39.482967 sshd[3396]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:39.485467 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:46140.service: Deactivated successfully.
Nov 1 00:31:39.486170 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 00:31:39.486703 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit.
Nov 1 00:31:39.487290 systemd-logind[1203]: Removed session 11.
Nov 1 00:31:44.487806 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:46152.service.
Nov 1 00:31:44.522148 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 46152 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:44.523246 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:44.526150 systemd-logind[1203]: New session 12 of user core.
Nov 1 00:31:44.526991 systemd[1]: Started session-12.scope.
Nov 1 00:31:44.632877 sshd[3410]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:44.635497 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:46152.service: Deactivated successfully.
Nov 1 00:31:44.636172 systemd[1]: session-12.scope: Deactivated successfully.
Nov 1 00:31:44.636841 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit.
Nov 1 00:31:44.637464 systemd-logind[1203]: Removed session 12.
Nov 1 00:31:49.637915 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:38622.service.
Nov 1 00:31:49.672200 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 38622 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:49.673681 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:49.676759 systemd-logind[1203]: New session 13 of user core.
Nov 1 00:31:49.677582 systemd[1]: Started session-13.scope.
Nov 1 00:31:49.787069 sshd[3423]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:49.789948 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:38622.service: Deactivated successfully.
Nov 1 00:31:49.790612 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 00:31:49.791167 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit.
Nov 1 00:31:49.792288 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:38636.service.
Nov 1 00:31:49.793212 systemd-logind[1203]: Removed session 13.
Nov 1 00:31:49.825893 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 38636 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:49.827199 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:49.830257 systemd-logind[1203]: New session 14 of user core.
Nov 1 00:31:49.832058 systemd[1]: Started session-14.scope.
Nov 1 00:31:50.004432 sshd[3436]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:50.007384 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:38636.service: Deactivated successfully.
Nov 1 00:31:50.008098 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:31:50.008620 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:31:50.009764 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:38652.service.
Nov 1 00:31:50.010451 systemd-logind[1203]: Removed session 14.
Nov 1 00:31:50.046146 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 38652 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:50.047243 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:50.050373 systemd-logind[1203]: New session 15 of user core.
Nov 1 00:31:50.051333 systemd[1]: Started session-15.scope.
Nov 1 00:31:50.636849 sshd[3447]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:50.640116 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:38666.service.
Nov 1 00:31:50.642489 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:38652.service: Deactivated successfully.
Nov 1 00:31:50.643165 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:31:50.643825 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:31:50.644610 systemd-logind[1203]: Removed session 15.
Nov 1 00:31:50.680555 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 38666 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:50.681804 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:50.685349 systemd-logind[1203]: New session 16 of user core.
Nov 1 00:31:50.685820 systemd[1]: Started session-16.scope.
Nov 1 00:31:50.904346 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:38680.service.
Nov 1 00:31:50.904804 sshd[3463]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:50.907108 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:31:50.907241 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:38666.service: Deactivated successfully.
Nov 1 00:31:50.907873 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:31:50.908498 systemd-logind[1203]: Removed session 16.
Nov 1 00:31:50.939359 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 38680 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:50.941022 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:50.944029 systemd-logind[1203]: New session 17 of user core.
Nov 1 00:31:50.944893 systemd[1]: Started session-17.scope.
Nov 1 00:31:51.054418 sshd[3477]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:51.056835 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:38680.service: Deactivated successfully.
Nov 1 00:31:51.057606 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:31:51.058129 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:31:51.058876 systemd-logind[1203]: Removed session 17.
Nov 1 00:31:56.059279 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:38696.service.
Nov 1 00:31:56.092769 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 38696 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:31:56.094340 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:31:56.097480 systemd-logind[1203]: New session 18 of user core.
Nov 1 00:31:56.098333 systemd[1]: Started session-18.scope.
Nov 1 00:31:56.204349 sshd[3493]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:56.206624 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:38696.service: Deactivated successfully.
Nov 1 00:31:56.207378 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:31:56.207908 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:31:56.208606 systemd-logind[1203]: Removed session 18.
Nov 1 00:32:01.209134 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:36410.service.
Nov 1 00:32:01.243282 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 36410 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:32:01.244904 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:32:01.248595 systemd-logind[1203]: New session 19 of user core.
Nov 1 00:32:01.249190 systemd[1]: Started session-19.scope.
Nov 1 00:32:01.364660 sshd[3508]: pam_unix(sshd:session): session closed for user core
Nov 1 00:32:01.367030 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:36410.service: Deactivated successfully.
Nov 1 00:32:01.367817 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:32:01.368292 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:32:01.368876 systemd-logind[1203]: Removed session 19.
Nov 1 00:32:06.369131 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:36416.service.
Nov 1 00:32:06.402699 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 36416 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:32:06.404076 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:32:06.407626 systemd-logind[1203]: New session 20 of user core.
Nov 1 00:32:06.408162 systemd[1]: Started session-20.scope.
Nov 1 00:32:06.514342 sshd[3522]: pam_unix(sshd:session): session closed for user core
Nov 1 00:32:06.517227 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:36416.service: Deactivated successfully.
Nov 1 00:32:06.517891 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:32:06.518444 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:32:06.519508 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:36422.service.
Nov 1 00:32:06.521327 systemd-logind[1203]: Removed session 20.
Nov 1 00:32:06.553534 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 36422 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:32:06.554620 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:32:06.557996 systemd-logind[1203]: New session 21 of user core.
Nov 1 00:32:06.558402 systemd[1]: Started session-21.scope.
Nov 1 00:32:09.008655 env[1216]: time="2025-11-01T00:32:09.008613245Z" level=info msg="StopContainer for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" with timeout 30 (s)"
Nov 1 00:32:09.010698 env[1216]: time="2025-11-01T00:32:09.010659194Z" level=info msg="Stop container \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" with signal terminated"
Nov 1 00:32:09.029531 systemd[1]: run-containerd-runc-k8s.io-416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81-runc.aA3S5U.mount: Deactivated successfully.
Nov 1 00:32:09.037853 systemd[1]: cri-containerd-1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758.scope: Deactivated successfully.
Nov 1 00:32:09.051186 env[1216]: time="2025-11-01T00:32:09.051137951Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:32:09.055784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758-rootfs.mount: Deactivated successfully.
Nov 1 00:32:09.058168 env[1216]: time="2025-11-01T00:32:09.058130098Z" level=info msg="StopContainer for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" with timeout 2 (s)"
Nov 1 00:32:09.058471 env[1216]: time="2025-11-01T00:32:09.058438850Z" level=info msg="Stop container \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" with signal terminated"
Nov 1 00:32:09.063244 systemd-networkd[1042]: lxc_health: Link DOWN
Nov 1 00:32:09.063250 systemd-networkd[1042]: lxc_health: Lost carrier
Nov 1 00:32:09.064558 env[1216]: time="2025-11-01T00:32:09.064513780Z" level=info msg="shim disconnected" id=1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758
Nov 1 00:32:09.064676 env[1216]: time="2025-11-01T00:32:09.064657976Z" level=warning msg="cleaning up after shim disconnected" id=1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758 namespace=k8s.io
Nov 1 00:32:09.064733 env[1216]: time="2025-11-01T00:32:09.064720295Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:09.071051 env[1216]: time="2025-11-01T00:32:09.071017659Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3593 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:09.073481 env[1216]: time="2025-11-01T00:32:09.073447038Z" level=info msg="StopContainer for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" returns successfully"
Nov 1 00:32:09.074176 env[1216]: time="2025-11-01T00:32:09.074140301Z" level=info msg="StopPodSandbox for \"6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c\""
Nov 1 00:32:09.074242 env[1216]: time="2025-11-01T00:32:09.074202220Z" level=info msg="Container to stop \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:32:09.075994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c-shm.mount: Deactivated successfully.
Nov 1 00:32:09.082176 systemd[1]: cri-containerd-6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c.scope: Deactivated successfully.
Nov 1 00:32:09.104261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c-rootfs.mount: Deactivated successfully.
Nov 1 00:32:09.105801 systemd[1]: cri-containerd-416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81.scope: Deactivated successfully.
Nov 1 00:32:09.106118 systemd[1]: cri-containerd-416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81.scope: Consumed 6.022s CPU time.
Nov 1 00:32:09.119439 env[1216]: time="2025-11-01T00:32:09.119290983Z" level=info msg="shim disconnected" id=6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c
Nov 1 00:32:09.119865 env[1216]: time="2025-11-01T00:32:09.119438579Z" level=warning msg="cleaning up after shim disconnected" id=6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c namespace=k8s.io
Nov 1 00:32:09.119865 env[1216]: time="2025-11-01T00:32:09.119455099Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:09.126711 env[1216]: time="2025-11-01T00:32:09.126673280Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3639 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:09.127108 env[1216]: time="2025-11-01T00:32:09.127081870Z" level=info msg="TearDown network for sandbox \"6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c\" successfully"
Nov 1 00:32:09.127155 env[1216]: time="2025-11-01T00:32:09.127108189Z" level=info msg="StopPodSandbox for \"6e0d4ce7066d7df9b5038e526b0860d18cde1891fac0b11d84a5930eb3dc545c\" returns successfully"
Nov 1 00:32:09.129066 env[1216]: time="2025-11-01T00:32:09.129031581Z" level=info msg="shim disconnected" id=416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81
Nov 1 00:32:09.129157 env[1216]: time="2025-11-01T00:32:09.129069341Z" level=warning msg="cleaning up after shim disconnected" id=416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81 namespace=k8s.io
Nov 1 00:32:09.129157 env[1216]: time="2025-11-01T00:32:09.129079100Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:09.138009 env[1216]: time="2025-11-01T00:32:09.137967200Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3652 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:09.140367 env[1216]: time="2025-11-01T00:32:09.140325702Z" level=info msg="StopContainer for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" returns successfully"
Nov 1 00:32:09.140676 env[1216]: time="2025-11-01T00:32:09.140648174Z" level=info msg="StopPodSandbox for \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\""
Nov 1 00:32:09.140753 env[1216]: time="2025-11-01T00:32:09.140739291Z" level=info msg="Container to stop \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:32:09.140784 env[1216]: time="2025-11-01T00:32:09.140756011Z" level=info msg="Container to stop \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:32:09.140816 env[1216]: time="2025-11-01T00:32:09.140767411Z" level=info msg="Container to stop \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:32:09.140816 env[1216]: time="2025-11-01T00:32:09.140797610Z" level=info msg="Container to stop \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:32:09.140816 env[1216]: time="2025-11-01T00:32:09.140810570Z" level=info msg="Container to stop \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:32:09.147321 systemd[1]: cri-containerd-a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb.scope: Deactivated successfully.
Nov 1 00:32:09.171923 env[1216]: time="2025-11-01T00:32:09.171872360Z" level=info msg="shim disconnected" id=a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb
Nov 1 00:32:09.171923 env[1216]: time="2025-11-01T00:32:09.171923039Z" level=warning msg="cleaning up after shim disconnected" id=a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb namespace=k8s.io
Nov 1 00:32:09.172161 env[1216]: time="2025-11-01T00:32:09.171933599Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:09.178457 env[1216]: time="2025-11-01T00:32:09.178426426Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3683 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:09.178761 env[1216]: time="2025-11-01T00:32:09.178737670Z" level=info msg="TearDown network for sandbox \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" successfully"
Nov 1 00:32:09.178801 env[1216]: time="2025-11-01T00:32:09.178761910Z" level=info msg="StopPodSandbox for \"a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb\" returns successfully"
Nov 1 00:32:09.249606 kubelet[1923]: I1101 00:32:09.249404 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hostproc\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.249606 kubelet[1923]: I1101 00:32:09.249438 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-xtables-lock\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.249606 kubelet[1923]: I1101 00:32:09.249460 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-clustermesh-secrets\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.249606 kubelet[1923]: I1101 00:32:09.249474 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-bpf-maps\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.249606 kubelet[1923]: I1101 00:32:09.249491 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379983f1-7319-4603-8e83-fa54f160f72c-cilium-config-path\") pod \"379983f1-7319-4603-8e83-fa54f160f72c\" (UID: \"379983f1-7319-4603-8e83-fa54f160f72c\") "
Nov 1 00:32:09.249606 kubelet[1923]: I1101 00:32:09.249508 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-etc-cni-netd\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250037 kubelet[1923]: I1101 00:32:09.249526 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fdn\" (UniqueName: \"kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-kube-api-access-h9fdn\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250037 kubelet[1923]: I1101 00:32:09.249556 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6mpm\" (UniqueName: \"kubernetes.io/projected/379983f1-7319-4603-8e83-fa54f160f72c-kube-api-access-j6mpm\") pod \"379983f1-7319-4603-8e83-fa54f160f72c\" (UID: \"379983f1-7319-4603-8e83-fa54f160f72c\") "
Nov 1 00:32:09.250037 kubelet[1923]: I1101 00:32:09.249572 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-lib-modules\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250037 kubelet[1923]: I1101 00:32:09.249587 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-kernel\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250037 kubelet[1923]: I1101 00:32:09.249601 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-run\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250037 kubelet[1923]: I1101 00:32:09.249622 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-config-path\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250180 kubelet[1923]: I1101 00:32:09.249640 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hubble-tls\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250180 kubelet[1923]: I1101 00:32:09.249655 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-cgroup\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250180 kubelet[1923]: I1101 00:32:09.249670 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-net\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250180 kubelet[1923]: I1101 00:32:09.249706 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cni-path\") pod \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\" (UID: \"5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b\") "
Nov 1 00:32:09.250577 kubelet[1923]: I1101 00:32:09.250380 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cni-path" (OuterVolumeSpecName: "cni-path") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250577 kubelet[1923]: I1101 00:32:09.250406 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250577 kubelet[1923]: I1101 00:32:09.250383 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250577 kubelet[1923]: I1101 00:32:09.250382 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250577 kubelet[1923]: I1101 00:32:09.250431 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250781 kubelet[1923]: I1101 00:32:09.250629 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hostproc" (OuterVolumeSpecName: "hostproc") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250781 kubelet[1923]: I1101 00:32:09.250660 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.250781 kubelet[1923]: I1101 00:32:09.250695 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.255239 kubelet[1923]: I1101 00:32:09.255183 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379983f1-7319-4603-8e83-fa54f160f72c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "379983f1-7319-4603-8e83-fa54f160f72c" (UID: "379983f1-7319-4603-8e83-fa54f160f72c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:32:09.255471 kubelet[1923]: I1101 00:32:09.255309 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:32:09.255471 kubelet[1923]: I1101 00:32:09.255372 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.255471 kubelet[1923]: I1101 00:32:09.255402 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:09.261577 kubelet[1923]: I1101 00:32:09.260278 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379983f1-7319-4603-8e83-fa54f160f72c-kube-api-access-j6mpm" (OuterVolumeSpecName: "kube-api-access-j6mpm") pod "379983f1-7319-4603-8e83-fa54f160f72c" (UID: "379983f1-7319-4603-8e83-fa54f160f72c"). InnerVolumeSpecName "kube-api-access-j6mpm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:32:09.261577 kubelet[1923]: I1101 00:32:09.260310 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:32:09.261577 kubelet[1923]: I1101 00:32:09.260318 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:32:09.261577 kubelet[1923]: I1101 00:32:09.260312 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-kube-api-access-h9fdn" (OuterVolumeSpecName: "kube-api-access-h9fdn") pod "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" (UID: "5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b"). InnerVolumeSpecName "kube-api-access-h9fdn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:32:09.350806 kubelet[1923]: I1101 00:32:09.350777 1923 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.350924 kubelet[1923]: I1101 00:32:09.350911 1923 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.350983 kubelet[1923]: I1101 00:32:09.350973 1923 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351055 kubelet[1923]: I1101 00:32:09.351044 1923 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351120 kubelet[1923]: I1101 00:32:09.351109 1923 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351171 kubelet[1923]: I1101 00:32:09.351163 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379983f1-7319-4603-8e83-fa54f160f72c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351224 kubelet[1923]: I1101 00:32:09.351216 1923 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351279 kubelet[1923]: I1101 00:32:09.351269 1923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h9fdn\" (UniqueName: \"kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-kube-api-access-h9fdn\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351335 kubelet[1923]: I1101 00:32:09.351326 1923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6mpm\" (UniqueName: \"kubernetes.io/projected/379983f1-7319-4603-8e83-fa54f160f72c-kube-api-access-j6mpm\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351387 kubelet[1923]: I1101 00:32:09.351377 1923 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351443 kubelet[1923]: I1101 00:32:09.351434 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351496 kubelet[1923]: I1101 00:32:09.351485 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351584 kubelet[1923]: I1101 00:32:09.351540 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351649 kubelet[1923]: I1101 00:32:09.351638 1923 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351712 kubelet[1923]: I1101 00:32:09.351701 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.351772 kubelet[1923]: I1101 00:32:09.351762 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:09.678706 kubelet[1923]: I1101 00:32:09.678674 1923 scope.go:117] "RemoveContainer" containerID="1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758"
Nov 1 00:32:09.681609 env[1216]: time="2025-11-01T00:32:09.681564054Z" level=info msg="RemoveContainer for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\""
Nov 1 00:32:09.683196 systemd[1]: Removed slice kubepods-besteffort-pod379983f1_7319_4603_8e83_fa54f160f72c.slice.
Nov 1 00:32:09.687880 env[1216]: time="2025-11-01T00:32:09.687837178Z" level=info msg="RemoveContainer for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" returns successfully" Nov 1 00:32:09.688100 kubelet[1923]: I1101 00:32:09.688075 1923 scope.go:117] "RemoveContainer" containerID="1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758" Nov 1 00:32:09.688556 env[1216]: time="2025-11-01T00:32:09.688395325Z" level=error msg="ContainerStatus for \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\": not found" Nov 1 00:32:09.688794 kubelet[1923]: E1101 00:32:09.688772 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\": not found" containerID="1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758" Nov 1 00:32:09.689670 systemd[1]: Removed slice kubepods-burstable-pod5cb00ccf_4d3a_44f1_a46b_5ce1ed58192b.slice. Nov 1 00:32:09.689759 systemd[1]: kubepods-burstable-pod5cb00ccf_4d3a_44f1_a46b_5ce1ed58192b.slice: Consumed 6.140s CPU time. 
Nov 1 00:32:09.690793 kubelet[1923]: I1101 00:32:09.690692 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758"} err="failed to get container status \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e198bdc99c403ed79921fc9bddda9ab65844b39df54d89ef6f7254b67eed758\": not found" Nov 1 00:32:09.690899 kubelet[1923]: I1101 00:32:09.690885 1923 scope.go:117] "RemoveContainer" containerID="416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81" Nov 1 00:32:09.692391 env[1216]: time="2025-11-01T00:32:09.692352227Z" level=info msg="RemoveContainer for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\"" Nov 1 00:32:09.695605 env[1216]: time="2025-11-01T00:32:09.695539748Z" level=info msg="RemoveContainer for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" returns successfully" Nov 1 00:32:09.695840 kubelet[1923]: I1101 00:32:09.695807 1923 scope.go:117] "RemoveContainer" containerID="4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5" Nov 1 00:32:09.697801 env[1216]: time="2025-11-01T00:32:09.697760613Z" level=info msg="RemoveContainer for \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\"" Nov 1 00:32:09.701127 env[1216]: time="2025-11-01T00:32:09.701095810Z" level=info msg="RemoveContainer for \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\" returns successfully" Nov 1 00:32:09.701375 kubelet[1923]: I1101 00:32:09.701352 1923 scope.go:117] "RemoveContainer" containerID="fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a" Nov 1 00:32:09.703809 env[1216]: time="2025-11-01T00:32:09.703779344Z" level=info msg="RemoveContainer for \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\"" Nov 1 00:32:09.706893 env[1216]: 
time="2025-11-01T00:32:09.706863267Z" level=info msg="RemoveContainer for \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\" returns successfully" Nov 1 00:32:09.707105 kubelet[1923]: I1101 00:32:09.707083 1923 scope.go:117] "RemoveContainer" containerID="042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645" Nov 1 00:32:09.711969 env[1216]: time="2025-11-01T00:32:09.711938341Z" level=info msg="RemoveContainer for \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\"" Nov 1 00:32:09.714356 env[1216]: time="2025-11-01T00:32:09.714331762Z" level=info msg="RemoveContainer for \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\" returns successfully" Nov 1 00:32:09.714509 kubelet[1923]: I1101 00:32:09.714491 1923 scope.go:117] "RemoveContainer" containerID="644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db" Nov 1 00:32:09.715466 env[1216]: time="2025-11-01T00:32:09.715441655Z" level=info msg="RemoveContainer for \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\"" Nov 1 00:32:09.717675 env[1216]: time="2025-11-01T00:32:09.717648200Z" level=info msg="RemoveContainer for \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\" returns successfully" Nov 1 00:32:09.717834 kubelet[1923]: I1101 00:32:09.717804 1923 scope.go:117] "RemoveContainer" containerID="416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81" Nov 1 00:32:09.718134 env[1216]: time="2025-11-01T00:32:09.718078749Z" level=error msg="ContainerStatus for \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\": not found" Nov 1 00:32:09.718260 kubelet[1923]: E1101 00:32:09.718235 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\": not found" containerID="416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81" Nov 1 00:32:09.718304 kubelet[1923]: I1101 00:32:09.718268 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81"} err="failed to get container status \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\": rpc error: code = NotFound desc = an error occurred when try to find container \"416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81\": not found" Nov 1 00:32:09.718304 kubelet[1923]: I1101 00:32:09.718288 1923 scope.go:117] "RemoveContainer" containerID="4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5" Nov 1 00:32:09.718480 env[1216]: time="2025-11-01T00:32:09.718438060Z" level=error msg="ContainerStatus for \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\": not found" Nov 1 00:32:09.718591 kubelet[1923]: E1101 00:32:09.718571 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\": not found" containerID="4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5" Nov 1 00:32:09.718650 kubelet[1923]: I1101 00:32:09.718597 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5"} err="failed to get container status \"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"4581374c8ff0c68446e8c7d0145603efe474a0cf1170828bbbd8569cb722f4b5\": not found" Nov 1 00:32:09.718650 kubelet[1923]: I1101 00:32:09.718612 1923 scope.go:117] "RemoveContainer" containerID="fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a" Nov 1 00:32:09.718789 env[1216]: time="2025-11-01T00:32:09.718741053Z" level=error msg="ContainerStatus for \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\": not found" Nov 1 00:32:09.718894 kubelet[1923]: E1101 00:32:09.718873 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\": not found" containerID="fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a" Nov 1 00:32:09.718933 kubelet[1923]: I1101 00:32:09.718899 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a"} err="failed to get container status \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc7927c678df381a1b9ff45652c62bcf98e99d024ddcf3b49caf92717fd7e69a\": not found" Nov 1 00:32:09.718933 kubelet[1923]: I1101 00:32:09.718914 1923 scope.go:117] "RemoveContainer" containerID="042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645" Nov 1 00:32:09.719113 env[1216]: time="2025-11-01T00:32:09.719071405Z" level=error msg="ContainerStatus for \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\": not found" 
Nov 1 00:32:09.719192 kubelet[1923]: E1101 00:32:09.719174 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\": not found" containerID="042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645" Nov 1 00:32:09.719231 kubelet[1923]: I1101 00:32:09.719193 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645"} err="failed to get container status \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\": rpc error: code = NotFound desc = an error occurred when try to find container \"042e7eaaa5a0f70449bd8bdba7f98c91a85f18f17f456c735a94e235e4388645\": not found" Nov 1 00:32:09.719231 kubelet[1923]: I1101 00:32:09.719204 1923 scope.go:117] "RemoveContainer" containerID="644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db" Nov 1 00:32:09.719436 env[1216]: time="2025-11-01T00:32:09.719399517Z" level=error msg="ContainerStatus for \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\": not found" Nov 1 00:32:09.719577 kubelet[1923]: E1101 00:32:09.719542 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\": not found" containerID="644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db" Nov 1 00:32:09.719669 kubelet[1923]: I1101 00:32:09.719650 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db"} err="failed 
to get container status \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\": rpc error: code = NotFound desc = an error occurred when try to find container \"644872a6a892097ccbebf35ad270a44d397a7c36b9bb965c1fbe9b31c30ab9db\": not found" Nov 1 00:32:10.026721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416e2cd3034ff42594c02dd975bf32c0812606bcd3e0248d15f55b5ee7e0ce81-rootfs.mount: Deactivated successfully. Nov 1 00:32:10.026827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb-rootfs.mount: Deactivated successfully. Nov 1 00:32:10.026879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a76417e9cd45dc65b16ddaa46cdef0be21539f22d935fc663fffec6c3ec0f0fb-shm.mount: Deactivated successfully. Nov 1 00:32:10.026941 systemd[1]: var-lib-kubelet-pods-379983f1\x2d7319\x2d4603\x2d8e83\x2dfa54f160f72c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6mpm.mount: Deactivated successfully. Nov 1 00:32:10.027002 systemd[1]: var-lib-kubelet-pods-5cb00ccf\x2d4d3a\x2d44f1\x2da46b\x2d5ce1ed58192b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9fdn.mount: Deactivated successfully. Nov 1 00:32:10.027051 systemd[1]: var-lib-kubelet-pods-5cb00ccf\x2d4d3a\x2d44f1\x2da46b\x2d5ce1ed58192b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:32:10.027100 systemd[1]: var-lib-kubelet-pods-5cb00ccf\x2d4d3a\x2d44f1\x2da46b\x2d5ce1ed58192b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 1 00:32:10.519681 kubelet[1923]: E1101 00:32:10.519647 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:32:10.521482 kubelet[1923]: I1101 00:32:10.521445 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379983f1-7319-4603-8e83-fa54f160f72c" path="/var/lib/kubelet/pods/379983f1-7319-4603-8e83-fa54f160f72c/volumes" Nov 1 00:32:10.521875 kubelet[1923]: I1101 00:32:10.521840 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b" path="/var/lib/kubelet/pods/5cb00ccf-4d3a-44f1-a46b-5ce1ed58192b/volumes" Nov 1 00:32:10.958523 sshd[3536]: pam_unix(sshd:session): session closed for user core Nov 1 00:32:10.961288 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:36422.service: Deactivated successfully. Nov 1 00:32:10.961967 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:32:10.962131 systemd[1]: session-21.scope: Consumed 1.760s CPU time. Nov 1 00:32:10.962568 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:32:10.963607 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:49640.service. Nov 1 00:32:10.964471 systemd-logind[1203]: Removed session 21. Nov 1 00:32:11.000052 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 49640 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:32:11.001144 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:32:11.004718 systemd-logind[1203]: New session 22 of user core. Nov 1 00:32:11.005142 systemd[1]: Started session-22.scope. Nov 1 00:32:12.526770 sshd[3703]: pam_unix(sshd:session): session closed for user core Nov 1 00:32:12.530972 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:49650.service. Nov 1 00:32:12.534761 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:49640.service: Deactivated successfully. 
Nov 1 00:32:12.535666 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:32:12.535813 systemd[1]: session-22.scope: Consumed 1.433s CPU time. Nov 1 00:32:12.540471 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:32:12.543173 systemd-logind[1203]: Removed session 22. Nov 1 00:32:12.547045 systemd[1]: Created slice kubepods-burstable-podb204899d_bc56_4a31_a942_03950a41d437.slice. Nov 1 00:32:12.574298 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 49650 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:32:12.575651 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:32:12.580476 systemd[1]: Started session-23.scope. Nov 1 00:32:12.580883 systemd-logind[1203]: New session 23 of user core. Nov 1 00:32:12.670299 kubelet[1923]: I1101 00:32:12.670256 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-cgroup\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.670711 kubelet[1923]: I1101 00:32:12.670678 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-xtables-lock\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.670889 kubelet[1923]: I1101 00:32:12.670853 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-kernel\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671001 kubelet[1923]: I1101 
00:32:12.670986 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-hubble-tls\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671139 kubelet[1923]: I1101 00:32:12.671112 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2685\" (UniqueName: \"kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-kube-api-access-v2685\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671183 kubelet[1923]: I1101 00:32:12.671152 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-bpf-maps\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671183 kubelet[1923]: I1101 00:32:12.671172 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-hostproc\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671243 kubelet[1923]: I1101 00:32:12.671192 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cni-path\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671243 kubelet[1923]: I1101 00:32:12.671213 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-etc-cni-netd\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671243 kubelet[1923]: I1101 00:32:12.671230 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-cilium-ipsec-secrets\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671308 kubelet[1923]: I1101 00:32:12.671248 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-lib-modules\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671308 kubelet[1923]: I1101 00:32:12.671262 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-run\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671308 kubelet[1923]: I1101 00:32:12.671276 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-clustermesh-secrets\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671308 kubelet[1923]: I1101 00:32:12.671289 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-net\") pod \"cilium-fsx8s\" (UID: 
\"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.671308 kubelet[1923]: I1101 00:32:12.671307 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b204899d-bc56-4a31-a942-03950a41d437-cilium-config-path\") pod \"cilium-fsx8s\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " pod="kube-system/cilium-fsx8s" Nov 1 00:32:12.700628 sshd[3714]: pam_unix(sshd:session): session closed for user core Nov 1 00:32:12.704345 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:49652.service. Nov 1 00:32:12.707140 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:49650.service: Deactivated successfully. Nov 1 00:32:12.707921 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:32:12.709440 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:32:12.710564 systemd-logind[1203]: Removed session 23. Nov 1 00:32:12.720034 kubelet[1923]: E1101 00:32:12.719989 1923 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-v2685 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-fsx8s" podUID="b204899d-bc56-4a31-a942-03950a41d437" Nov 1 00:32:12.739957 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 49652 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:32:12.741170 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:32:12.744808 systemd-logind[1203]: New session 24 of user core. Nov 1 00:32:12.745299 systemd[1]: Started session-24.scope. 
Nov 1 00:32:13.557779 kubelet[1923]: E1101 00:32:13.557687 1923 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:32:13.778333 kubelet[1923]: I1101 00:32:13.778294 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cni-path\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " Nov 1 00:32:13.778674 kubelet[1923]: I1101 00:32:13.778342 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2685\" (UniqueName: \"kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-kube-api-access-v2685\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " Nov 1 00:32:13.778674 kubelet[1923]: I1101 00:32:13.778365 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b204899d-bc56-4a31-a942-03950a41d437-cilium-config-path\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " Nov 1 00:32:13.778674 kubelet[1923]: I1101 00:32:13.778359 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cni-path" (OuterVolumeSpecName: "cni-path") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:32:13.778674 kubelet[1923]: I1101 00:32:13.778380 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-etc-cni-netd\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " Nov 1 00:32:13.778674 kubelet[1923]: I1101 00:32:13.778395 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-xtables-lock\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " Nov 1 00:32:13.778797 kubelet[1923]: I1101 00:32:13.778426 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:32:13.778797 kubelet[1923]: I1101 00:32:13.778611 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-kernel\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") " Nov 1 00:32:13.778797 kubelet[1923]: I1101 00:32:13.778611 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.778797 kubelet[1923]: I1101 00:32:13.778635 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-bpf-maps\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.778797 kubelet[1923]: I1101 00:32:13.778651 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-hostproc\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.778919 kubelet[1923]: I1101 00:32:13.778656 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.778919 kubelet[1923]: I1101 00:32:13.778667 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-cgroup\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.778919 kubelet[1923]: I1101 00:32:13.778676 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.778919 kubelet[1923]: I1101 00:32:13.778686 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-hubble-tls\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.778919 kubelet[1923]: I1101 00:32:13.778692 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-hostproc" (OuterVolumeSpecName: "hostproc") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778706 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-clustermesh-secrets\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778727 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-cilium-ipsec-secrets\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778743 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-lib-modules\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778758 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-run\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778773 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-net\") pod \"b204899d-bc56-4a31-a942-03950a41d437\" (UID: \"b204899d-bc56-4a31-a942-03950a41d437\") "
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778808 1923 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.779026 kubelet[1923]: I1101 00:32:13.778818 1923 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.779213 kubelet[1923]: I1101 00:32:13.778826 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.779213 kubelet[1923]: I1101 00:32:13.778838 1923 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.779213 kubelet[1923]: I1101 00:32:13.778846 1923 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.779213 kubelet[1923]: I1101 00:32:13.778853 1923 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.782320 kubelet[1923]: I1101 00:32:13.778708 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.782320 kubelet[1923]: I1101 00:32:13.778885 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.782320 kubelet[1923]: I1101 00:32:13.780622 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b204899d-bc56-4a31-a942-03950a41d437-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:32:13.782320 kubelet[1923]: I1101 00:32:13.781212 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-kube-api-access-v2685" (OuterVolumeSpecName: "kube-api-access-v2685") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "kube-api-access-v2685". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:32:13.782320 kubelet[1923]: I1101 00:32:13.781236 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.782261 systemd[1]: var-lib-kubelet-pods-b204899d\x2dbc56\x2d4a31\x2da942\x2d03950a41d437-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv2685.mount: Deactivated successfully.
Nov 1 00:32:13.782709 kubelet[1923]: I1101 00:32:13.781301 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:32:13.782709 kubelet[1923]: I1101 00:32:13.782263 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:32:13.782709 kubelet[1923]: I1101 00:32:13.782264 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:32:13.782345 systemd[1]: var-lib-kubelet-pods-b204899d\x2dbc56\x2d4a31\x2da942\x2d03950a41d437-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 1 00:32:13.783451 kubelet[1923]: I1101 00:32:13.783428 1923 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b204899d-bc56-4a31-a942-03950a41d437" (UID: "b204899d-bc56-4a31-a942-03950a41d437"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:32:13.784363 systemd[1]: var-lib-kubelet-pods-b204899d\x2dbc56\x2d4a31\x2da942\x2d03950a41d437-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 1 00:32:13.784446 systemd[1]: var-lib-kubelet-pods-b204899d\x2dbc56\x2d4a31\x2da942\x2d03950a41d437-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Nov 1 00:32:13.879909 kubelet[1923]: I1101 00:32:13.879878 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880050 kubelet[1923]: I1101 00:32:13.880038 1923 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880107 kubelet[1923]: I1101 00:32:13.880097 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880158 kubelet[1923]: I1101 00:32:13.880150 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880210 kubelet[1923]: I1101 00:32:13.880201 1923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v2685\" (UniqueName: \"kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-kube-api-access-v2685\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880264 kubelet[1923]: I1101 00:32:13.880254 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b204899d-bc56-4a31-a942-03950a41d437-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880316 kubelet[1923]: I1101 00:32:13.880307 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b204899d-bc56-4a31-a942-03950a41d437-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880376 kubelet[1923]: I1101 00:32:13.880367 1923 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b204899d-bc56-4a31-a942-03950a41d437-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:13.880430 kubelet[1923]: I1101 00:32:13.880419 1923 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b204899d-bc56-4a31-a942-03950a41d437-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 1 00:32:14.524988 systemd[1]: Removed slice kubepods-burstable-podb204899d_bc56_4a31_a942_03950a41d437.slice.
Nov 1 00:32:14.729354 systemd[1]: Created slice kubepods-burstable-podd37a171f_4d18_4303_82f3_3ffe6e328aa9.slice.
Nov 1 00:32:14.786930 kubelet[1923]: I1101 00:32:14.786819 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d37a171f-4d18-4303-82f3-3ffe6e328aa9-hubble-tls\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787342 kubelet[1923]: I1101 00:32:14.787296 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-bpf-maps\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787429 kubelet[1923]: I1101 00:32:14.787415 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-lib-modules\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787527 kubelet[1923]: I1101 00:32:14.787513 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-host-proc-sys-net\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787644 kubelet[1923]: I1101 00:32:14.787630 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-cilium-cgroup\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787732 kubelet[1923]: I1101 00:32:14.787720 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-xtables-lock\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787830 kubelet[1923]: I1101 00:32:14.787818 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-hostproc\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.787934 kubelet[1923]: I1101 00:32:14.787920 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pll\" (UniqueName: \"kubernetes.io/projected/d37a171f-4d18-4303-82f3-3ffe6e328aa9-kube-api-access-h4pll\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788034 kubelet[1923]: I1101 00:32:14.788021 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-cilium-run\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788136 kubelet[1923]: I1101 00:32:14.788123 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d37a171f-4d18-4303-82f3-3ffe6e328aa9-cilium-config-path\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788228 kubelet[1923]: I1101 00:32:14.788216 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-host-proc-sys-kernel\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788342 kubelet[1923]: I1101 00:32:14.788320 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-cni-path\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788439 kubelet[1923]: I1101 00:32:14.788427 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d37a171f-4d18-4303-82f3-3ffe6e328aa9-etc-cni-netd\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788528 kubelet[1923]: I1101 00:32:14.788516 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d37a171f-4d18-4303-82f3-3ffe6e328aa9-clustermesh-secrets\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:14.788642 kubelet[1923]: I1101 00:32:14.788628 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d37a171f-4d18-4303-82f3-3ffe6e328aa9-cilium-ipsec-secrets\") pod \"cilium-2r7cd\" (UID: \"d37a171f-4d18-4303-82f3-3ffe6e328aa9\") " pod="kube-system/cilium-2r7cd"
Nov 1 00:32:15.032353 kubelet[1923]: E1101 00:32:15.032306 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:15.032862 env[1216]: time="2025-11-01T00:32:15.032814606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r7cd,Uid:d37a171f-4d18-4303-82f3-3ffe6e328aa9,Namespace:kube-system,Attempt:0,}"
Nov 1 00:32:15.044579 env[1216]: time="2025-11-01T00:32:15.044446731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:32:15.044579 env[1216]: time="2025-11-01T00:32:15.044490091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:32:15.044579 env[1216]: time="2025-11-01T00:32:15.044500330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:32:15.045010 env[1216]: time="2025-11-01T00:32:15.044824525Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986 pid=3758 runtime=io.containerd.runc.v2
Nov 1 00:32:15.054919 systemd[1]: Started cri-containerd-153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986.scope.
Nov 1 00:32:15.088727 env[1216]: time="2025-11-01T00:32:15.088689752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r7cd,Uid:d37a171f-4d18-4303-82f3-3ffe6e328aa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\""
Nov 1 00:32:15.090091 kubelet[1923]: E1101 00:32:15.089728 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:15.094841 env[1216]: time="2025-11-01T00:32:15.094626373Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 1 00:32:15.103854 env[1216]: time="2025-11-01T00:32:15.103805219Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f\""
Nov 1 00:32:15.104380 env[1216]: time="2025-11-01T00:32:15.104349530Z" level=info msg="StartContainer for \"027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f\""
Nov 1 00:32:15.122302 systemd[1]: Started cri-containerd-027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f.scope.
Nov 1 00:32:15.149733 env[1216]: time="2025-11-01T00:32:15.149689533Z" level=info msg="StartContainer for \"027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f\" returns successfully"
Nov 1 00:32:15.157570 systemd[1]: cri-containerd-027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f.scope: Deactivated successfully.
Nov 1 00:32:15.182885 env[1216]: time="2025-11-01T00:32:15.182814859Z" level=info msg="shim disconnected" id=027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f
Nov 1 00:32:15.182885 env[1216]: time="2025-11-01T00:32:15.182862538Z" level=warning msg="cleaning up after shim disconnected" id=027ad87ca0c1c134a87ce7c35f6da5d4c2b7cedcb4c1acd3c4b8d105361cb02f namespace=k8s.io
Nov 1 00:32:15.182885 env[1216]: time="2025-11-01T00:32:15.182871898Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:15.189465 env[1216]: time="2025-11-01T00:32:15.189429309Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:15.695866 kubelet[1923]: E1101 00:32:15.695690 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:15.699863 env[1216]: time="2025-11-01T00:32:15.699515425Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 1 00:32:15.711737 env[1216]: time="2025-11-01T00:32:15.711679102Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af\""
Nov 1 00:32:15.712447 env[1216]: time="2025-11-01T00:32:15.712365170Z" level=info msg="StartContainer for \"6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af\""
Nov 1 00:32:15.728447 systemd[1]: Started cri-containerd-6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af.scope.
Nov 1 00:32:15.759002 env[1216]: time="2025-11-01T00:32:15.758960192Z" level=info msg="StartContainer for \"6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af\" returns successfully"
Nov 1 00:32:15.762707 systemd[1]: cri-containerd-6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af.scope: Deactivated successfully.
Nov 1 00:32:15.781262 env[1216]: time="2025-11-01T00:32:15.781222660Z" level=info msg="shim disconnected" id=6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af
Nov 1 00:32:15.781453 env[1216]: time="2025-11-01T00:32:15.781266179Z" level=warning msg="cleaning up after shim disconnected" id=6cdbe639dd680de2d543b82550849e251c1ad315674779befc0b0880f1d703af namespace=k8s.io
Nov 1 00:32:15.781453 env[1216]: time="2025-11-01T00:32:15.781276859Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:15.787992 env[1216]: time="2025-11-01T00:32:15.787956107Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3906 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:16.519589 kubelet[1923]: E1101 00:32:16.519518 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:16.522198 kubelet[1923]: I1101 00:32:16.522158 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b204899d-bc56-4a31-a942-03950a41d437" path="/var/lib/kubelet/pods/b204899d-bc56-4a31-a942-03950a41d437/volumes"
Nov 1 00:32:16.698959 kubelet[1923]: E1101 00:32:16.698929 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:16.709583 env[1216]: time="2025-11-01T00:32:16.703100097Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 1 00:32:16.722361 env[1216]: time="2025-11-01T00:32:16.722321918Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263\""
Nov 1 00:32:16.723831 env[1216]: time="2025-11-01T00:32:16.723790856Z" level=info msg="StartContainer for \"dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263\""
Nov 1 00:32:16.740215 systemd[1]: Started cri-containerd-dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263.scope.
Nov 1 00:32:16.769175 env[1216]: time="2025-11-01T00:32:16.769138992Z" level=info msg="StartContainer for \"dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263\" returns successfully"
Nov 1 00:32:16.770979 systemd[1]: cri-containerd-dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263.scope: Deactivated successfully.
Nov 1 00:32:16.792887 env[1216]: time="2025-11-01T00:32:16.792844745Z" level=info msg="shim disconnected" id=dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263
Nov 1 00:32:16.792887 env[1216]: time="2025-11-01T00:32:16.792887824Z" level=warning msg="cleaning up after shim disconnected" id=dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263 namespace=k8s.io
Nov 1 00:32:16.793059 env[1216]: time="2025-11-01T00:32:16.792897384Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:16.799574 env[1216]: time="2025-11-01T00:32:16.799523521Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3964 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:16.898925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd83e4f109fb9a33b03087206f718a8e72a5f2799c38ccaccdd35a1c3350c263-rootfs.mount: Deactivated successfully.
Nov 1 00:32:17.702631 kubelet[1923]: E1101 00:32:17.702589 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:17.706924 env[1216]: time="2025-11-01T00:32:17.706698111Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 1 00:32:17.718764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557300461.mount: Deactivated successfully.
Nov 1 00:32:17.719677 env[1216]: time="2025-11-01T00:32:17.719426648Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97\""
Nov 1 00:32:17.720741 env[1216]: time="2025-11-01T00:32:17.720715030Z" level=info msg="StartContainer for \"cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97\""
Nov 1 00:32:17.735128 systemd[1]: Started cri-containerd-cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97.scope.
Nov 1 00:32:17.764010 env[1216]: time="2025-11-01T00:32:17.763969369Z" level=info msg="StartContainer for \"cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97\" returns successfully"
Nov 1 00:32:17.764254 systemd[1]: cri-containerd-cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97.scope: Deactivated successfully.
Nov 1 00:32:17.785061 env[1216]: time="2025-11-01T00:32:17.785016547Z" level=info msg="shim disconnected" id=cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97
Nov 1 00:32:17.785301 env[1216]: time="2025-11-01T00:32:17.785280184Z" level=warning msg="cleaning up after shim disconnected" id=cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97 namespace=k8s.io
Nov 1 00:32:17.785370 env[1216]: time="2025-11-01T00:32:17.785356662Z" level=info msg="cleaning up dead shim"
Nov 1 00:32:17.792339 env[1216]: time="2025-11-01T00:32:17.792307723Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:32:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4018 runtime=io.containerd.runc.v2\n"
Nov 1 00:32:17.899001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc45d9212493af023a70c63f6d8d12e77ba48fe03b3548392af4b481f5fb2f97-rootfs.mount: Deactivated successfully.
Nov 1 00:32:18.559106 kubelet[1923]: E1101 00:32:18.559045 1923 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 1 00:32:18.706010 kubelet[1923]: E1101 00:32:18.705664 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:18.711046 env[1216]: time="2025-11-01T00:32:18.710985142Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 00:32:18.728345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658885939.mount: Deactivated successfully.
Nov 1 00:32:18.732715 env[1216]: time="2025-11-01T00:32:18.732653696Z" level=info msg="CreateContainer within sandbox \"153cfc31381a58aec232ff113a7a65686b3e9f0a3ddc9ceb3cd67ffcf4397986\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20\""
Nov 1 00:32:18.733529 env[1216]: time="2025-11-01T00:32:18.733503044Z" level=info msg="StartContainer for \"0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20\""
Nov 1 00:32:18.748517 systemd[1]: Started cri-containerd-0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20.scope.
Nov 1 00:32:18.778315 env[1216]: time="2025-11-01T00:32:18.778273733Z" level=info msg="StartContainer for \"0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20\" returns successfully"
Nov 1 00:32:19.008571 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Nov 1 00:32:19.709642 kubelet[1923]: E1101 00:32:19.709612 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:19.724777 kubelet[1923]: I1101 00:32:19.724478 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2r7cd" podStartSLOduration=5.724461813 podStartE2EDuration="5.724461813s" podCreationTimestamp="2025-11-01 00:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:32:19.723690463 +0000 UTC m=+81.299063925" watchObservedRunningTime="2025-11-01 00:32:19.724461813 +0000 UTC m=+81.299835275"
Nov 1 00:32:20.239708 kubelet[1923]: I1101 00:32:20.239653 1923 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:32:20Z","lastTransitionTime":"2025-11-01T00:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 1 00:32:20.992710 systemd[1]: run-containerd-runc-k8s.io-0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20-runc.5WbXlq.mount: Deactivated successfully.
Nov 1 00:32:21.033358 kubelet[1923]: E1101 00:32:21.033268 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:21.779873 systemd-networkd[1042]: lxc_health: Link UP
Nov 1 00:32:21.792579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:32:21.792588 systemd-networkd[1042]: lxc_health: Gained carrier
Nov 1 00:32:22.933680 systemd-networkd[1042]: lxc_health: Gained IPv6LL
Nov 1 00:32:23.034393 kubelet[1923]: E1101 00:32:23.034292 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:23.717099 kubelet[1923]: E1101 00:32:23.717055 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:24.718693 kubelet[1923]: E1101 00:32:24.718654 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:25.243877 systemd[1]: run-containerd-runc-k8s.io-0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20-runc.F6YUrP.mount: Deactivated successfully.
Nov 1 00:32:26.521153 kubelet[1923]: E1101 00:32:26.521108 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:26.521153 kubelet[1923]: E1101 00:32:26.521129 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:32:27.354159 systemd[1]: run-containerd-runc-k8s.io-0ba8ff77c8d6890888cf9287468f19d2591d91abf7eb272fd04277000a55cd20-runc.etww1S.mount: Deactivated successfully.
Nov 1 00:32:27.408013 sshd[3728]: pam_unix(sshd:session): session closed for user core
Nov 1 00:32:27.410644 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:49652.service: Deactivated successfully.
Nov 1 00:32:27.411388 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:32:27.412031 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:32:27.412729 systemd-logind[1203]: Removed session 24.