Jul 10 00:33:37.719644 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:33:37.719664 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed Jul 9 23:19:15 -00 2025
Jul 10 00:33:37.719672 kernel: efi: EFI v2.70 by EDK II
Jul 10 00:33:37.719677 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 10 00:33:37.719682 kernel: random: crng init done
Jul 10 00:33:37.719688 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:33:37.719694 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 10 00:33:37.719701 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:33:37.719706 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719712 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719717 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719722 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719728 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719733 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719741 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719747 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719753 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:33:37.719759 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:33:37.719765 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:33:37.719770 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:33:37.719776 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 10 00:33:37.719782 kernel: Zone ranges:
Jul 10 00:33:37.719787 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:33:37.719794 kernel: DMA32 empty
Jul 10 00:33:37.719800 kernel: Normal empty
Jul 10 00:33:37.719805 kernel: Movable zone start for each node
Jul 10 00:33:37.719811 kernel: Early memory node ranges
Jul 10 00:33:37.719816 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 10 00:33:37.719822 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 10 00:33:37.719828 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 10 00:33:37.719834 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 10 00:33:37.719839 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 10 00:33:37.719845 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 10 00:33:37.719851 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 10 00:33:37.719856 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:33:37.719863 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:33:37.719869 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:33:37.719875 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:33:37.719880 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:33:37.719887 kernel: psci: Trusted OS migration not required
Jul 10 00:33:37.719895 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:33:37.719901 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:33:37.719909 kernel: ACPI: SRAT not present
Jul 10 00:33:37.719915 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 10 00:33:37.719921 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 10 00:33:37.719928 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:33:37.719934 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:33:37.719940 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:33:37.719946 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:33:37.719952 kernel: CPU features: detected: Spectre-v4
Jul 10 00:33:37.719958 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:33:37.719970 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:33:37.719977 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:33:37.719983 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:33:37.719989 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:33:37.719995 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:33:37.720006 kernel: Policy zone: DMA
Jul 10 00:33:37.720014 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f
Jul 10 00:33:37.720020 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:33:37.720027 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:33:37.720033 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:33:37.720039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:33:37.720047 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 10 00:33:37.720053 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:33:37.720059 kernel: trace event string verifier disabled
Jul 10 00:33:37.720065 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:33:37.720072 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:33:37.720078 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:33:37.720085 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:33:37.720091 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:33:37.720097 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:33:37.720103 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:33:37.720109 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:33:37.720119 kernel: GICv3: 256 SPIs implemented
Jul 10 00:33:37.720125 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:33:37.720131 kernel: GICv3: Distributor has no Range Selector support
Jul 10 00:33:37.720137 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:33:37.720143 kernel: GICv3: 16 PPIs implemented
Jul 10 00:33:37.720150 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:33:37.720156 kernel: ACPI: SRAT not present
Jul 10 00:33:37.720162 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:33:37.720170 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:33:37.720177 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:33:37.720183 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 10 00:33:37.720190 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 10 00:33:37.720197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:33:37.720203 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:33:37.720210 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:33:37.720218 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:33:37.720245 kernel: arm-pv: using stolen time PV
Jul 10 00:33:37.720252 kernel: Console: colour dummy device 80x25
Jul 10 00:33:37.720258 kernel: ACPI: Core revision 20210730
Jul 10 00:33:37.720265 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:33:37.720272 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:33:37.720278 kernel: LSM: Security Framework initializing
Jul 10 00:33:37.720286 kernel: SELinux: Initializing.
Jul 10 00:33:37.720292 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:33:37.720310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:33:37.720317 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:33:37.720323 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:33:37.720330 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:33:37.720336 kernel: Remapping and enabling EFI services.
Jul 10 00:33:37.720342 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:33:37.720349 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:33:37.720357 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:33:37.720363 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 10 00:33:37.720369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:33:37.720376 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:33:37.720382 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:33:37.720389 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:33:37.720396 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 10 00:33:37.720402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:33:37.720408 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:33:37.720415 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:33:37.720422 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:33:37.720428 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 10 00:33:37.720435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:33:37.720442 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:33:37.720452 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:33:37.720460 kernel: SMP: Total of 4 processors activated.
Jul 10 00:33:37.720466 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:33:37.720473 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:33:37.720480 kernel: CPU features: detected: Common not Private translations
Jul 10 00:33:37.720487 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:33:37.720493 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:33:37.720500 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:33:37.720508 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:33:37.720515 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:33:37.720521 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:33:37.720528 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:33:37.720534 kernel: alternatives: patching kernel code
Jul 10 00:33:37.720542 kernel: devtmpfs: initialized
Jul 10 00:33:37.720549 kernel: KASLR enabled
Jul 10 00:33:37.720556 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:33:37.720562 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:33:37.720569 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:33:37.720576 kernel: SMBIOS 3.0.0 present.
Jul 10 00:33:37.720582 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 10 00:33:37.720589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:33:37.720596 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:33:37.720604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:33:37.720611 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:33:37.720617 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:33:37.720624 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Jul 10 00:33:37.720631 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:33:37.720638 kernel: cpuidle: using governor menu
Jul 10 00:33:37.720645 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:33:37.720651 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:33:37.720658 kernel: ACPI: bus type PCI registered
Jul 10 00:33:37.720666 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:33:37.720672 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:33:37.720679 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:33:37.720686 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:33:37.720695 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:33:37.720702 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:33:37.720709 kernel: cryptd: max_cpu_qlen set to 1000
Jul 10 00:33:37.720716 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:33:37.720722 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:33:37.720730 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:33:37.720740 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:33:37.720747 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 10 00:33:37.720754 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 10 00:33:37.720760 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 10 00:33:37.720767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:33:37.720774 kernel: ACPI: Interpreter enabled
Jul 10 00:33:37.720781 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:33:37.720787 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:33:37.720796 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:33:37.720803 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:33:37.720810 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:33:37.720931 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:33:37.721006 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:33:37.721077 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:33:37.721146 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:33:37.721215 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:33:37.721236 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:33:37.721243 kernel: PCI host bridge to bus 0000:00
Jul 10 00:33:37.721323 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:33:37.721381 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:33:37.721444 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:33:37.721508 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:33:37.721590 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:33:37.721656 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:33:37.721717 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:33:37.721776 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:33:37.721834 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:33:37.721893 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:33:37.721952 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:33:37.722021 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:33:37.722077 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:33:37.722133 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:33:37.722186 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:33:37.722195 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:33:37.722201 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:33:37.722208 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:33:37.722215 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:33:37.722232 kernel: iommu: Default domain type: Translated
Jul 10 00:33:37.722239 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:33:37.722245 kernel: vgaarb: loaded
Jul 10 00:33:37.722252 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 10 00:33:37.722259 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 10 00:33:37.722266 kernel: PTP clock support registered
Jul 10 00:33:37.722272 kernel: Registered efivars operations
Jul 10 00:33:37.722279 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:33:37.722285 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:33:37.722294 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:33:37.722300 kernel: pnp: PnP ACPI init
Jul 10 00:33:37.722367 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:33:37.722377 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:33:37.722384 kernel: NET: Registered PF_INET protocol family
Jul 10 00:33:37.722390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:33:37.722397 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:33:37.722404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:33:37.722412 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:33:37.722419 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 10 00:33:37.722425 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:33:37.722432 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:33:37.722439 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:33:37.722445 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:33:37.722452 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:33:37.722459 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:33:37.722465 kernel: kvm [1]: HYP mode not available
Jul 10 00:33:37.722473 kernel: Initialise system trusted keyrings
Jul 10 00:33:37.722480 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:33:37.722486 kernel: Key type asymmetric registered
Jul 10 00:33:37.722493 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:33:37.722500 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 10 00:33:37.722506 kernel: io scheduler mq-deadline registered
Jul 10 00:33:37.722513 kernel: io scheduler kyber registered
Jul 10 00:33:37.722519 kernel: io scheduler bfq registered
Jul 10 00:33:37.722526 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:33:37.722534 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:33:37.722541 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:33:37.722598 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:33:37.722607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:33:37.722614 kernel: thunder_xcv, ver 1.0
Jul 10 00:33:37.722620 kernel: thunder_bgx, ver 1.0
Jul 10 00:33:37.722627 kernel: nicpf, ver 1.0
Jul 10 00:33:37.722633 kernel: nicvf, ver 1.0
Jul 10 00:33:37.722702 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:33:37.722759 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:33:37 UTC (1752107617)
Jul 10 00:33:37.722768 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:33:37.722775 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:33:37.722781 kernel: Segment Routing with IPv6
Jul 10 00:33:37.722787 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:33:37.722794 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:33:37.722801 kernel: Key type dns_resolver registered
Jul 10 00:33:37.722807 kernel: registered taskstats version 1
Jul 10 00:33:37.722815 kernel: Loading compiled-in X.509 certificates
Jul 10 00:33:37.722822 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 9e274a0dc4fc3d34232d90d226b034c4fe0e3e22'
Jul 10 00:33:37.722829 kernel: Key type .fscrypt registered
Jul 10 00:33:37.722835 kernel: Key type fscrypt-provisioning registered
Jul 10 00:33:37.722842 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:33:37.722849 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:33:37.722855 kernel: ima: No architecture policies found
Jul 10 00:33:37.722862 kernel: clk: Disabling unused clocks
Jul 10 00:33:37.722868 kernel: Freeing unused kernel memory: 36416K
Jul 10 00:33:37.722876 kernel: Run /init as init process
Jul 10 00:33:37.722882 kernel: with arguments:
Jul 10 00:33:37.722889 kernel: /init
Jul 10 00:33:37.722895 kernel: with environment:
Jul 10 00:33:37.722901 kernel: HOME=/
Jul 10 00:33:37.722908 kernel: TERM=linux
Jul 10 00:33:37.722914 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:33:37.722923 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 10 00:33:37.722932 systemd[1]: Detected virtualization kvm.
Jul 10 00:33:37.722940 systemd[1]: Detected architecture arm64.
Jul 10 00:33:37.722947 systemd[1]: Running in initrd.
Jul 10 00:33:37.722953 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:33:37.722960 systemd[1]: Hostname set to <localhost>.
Jul 10 00:33:37.722967 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:33:37.722974 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:33:37.722981 systemd[1]: Started systemd-ask-password-console.path.
Jul 10 00:33:37.722989 systemd[1]: Reached target cryptsetup.target.
Jul 10 00:33:37.722995 systemd[1]: Reached target paths.target.
Jul 10 00:33:37.723008 systemd[1]: Reached target slices.target.
Jul 10 00:33:37.723015 systemd[1]: Reached target swap.target.
Jul 10 00:33:37.723022 systemd[1]: Reached target timers.target.
Jul 10 00:33:37.723029 systemd[1]: Listening on iscsid.socket.
Jul 10 00:33:37.723036 systemd[1]: Listening on iscsiuio.socket.
Jul 10 00:33:37.723044 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 10 00:33:37.723051 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 10 00:33:37.723058 systemd[1]: Listening on systemd-journald.socket.
Jul 10 00:33:37.723065 systemd[1]: Listening on systemd-networkd.socket.
Jul 10 00:33:37.723072 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 10 00:33:37.723079 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 10 00:33:37.723085 systemd[1]: Reached target sockets.target.
Jul 10 00:33:37.723092 systemd[1]: Starting kmod-static-nodes.service...
Jul 10 00:33:37.723099 systemd[1]: Finished network-cleanup.service.
Jul 10 00:33:37.723107 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:33:37.723114 systemd[1]: Starting systemd-journald.service...
Jul 10 00:33:37.723121 systemd[1]: Starting systemd-modules-load.service...
Jul 10 00:33:37.723128 systemd[1]: Starting systemd-resolved.service...
Jul 10 00:33:37.723135 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 10 00:33:37.723141 systemd[1]: Finished kmod-static-nodes.service.
Jul 10 00:33:37.723148 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:33:37.723155 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 10 00:33:37.723162 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 10 00:33:37.723170 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 10 00:33:37.723177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 10 00:33:37.723187 systemd-journald[289]: Journal started
Jul 10 00:33:37.723243 systemd-journald[289]: Runtime Journal (/run/log/journal/d07d8d157d84429caa2f0e789d0461d8) is 6.0M, max 48.7M, 42.6M free.
Jul 10 00:33:37.704170 systemd-modules-load[290]: Inserted module 'overlay'
Jul 10 00:33:37.724463 systemd[1]: Started systemd-journald.service.
Jul 10 00:33:37.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.727237 kernel: audit: type=1130 audit(1752107617.724:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.728570 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 10 00:33:37.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.732036 systemd-resolved[291]: Positive Trust Anchors:
Jul 10 00:33:37.732771 kernel: audit: type=1130 audit(1752107617.729:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.732050 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:33:37.735412 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:33:37.732079 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 10 00:33:37.732374 systemd[1]: Starting dracut-cmdline.service...
Jul 10 00:33:37.738015 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jul 10 00:33:37.743305 kernel: Bridge firewalling registered
Jul 10 00:33:37.743321 kernel: audit: type=1130 audit(1752107617.740:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.739667 systemd[1]: Started systemd-resolved.service.
Jul 10 00:33:37.740044 systemd-modules-load[290]: Inserted module 'br_netfilter'
Jul 10 00:33:37.744989 systemd[1]: Reached target nss-lookup.target.
Jul 10 00:33:37.746739 dracut-cmdline[308]: dracut-dracut-053
Jul 10 00:33:37.748726 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f
Jul 10 00:33:37.753394 kernel: SCSI subsystem initialized
Jul 10 00:33:37.760251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:33:37.760281 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:33:37.760290 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 10 00:33:37.763124 systemd-modules-load[290]: Inserted module 'dm_multipath'
Jul 10 00:33:37.763898 systemd[1]: Finished systemd-modules-load.service.
Jul 10 00:33:37.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.765850 systemd[1]: Starting systemd-sysctl.service...
Jul 10 00:33:37.768248 kernel: audit: type=1130 audit(1752107617.764:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.773547 systemd[1]: Finished systemd-sysctl.service.
Jul 10 00:33:37.776282 kernel: audit: type=1130 audit(1752107617.773:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.812244 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:33:37.824238 kernel: iscsi: registered transport (tcp)
Jul 10 00:33:37.841248 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:33:37.841274 kernel: QLogic iSCSI HBA Driver
Jul 10 00:33:37.874849 systemd[1]: Finished dracut-cmdline.service.
Jul 10 00:33:37.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.876365 systemd[1]: Starting dracut-pre-udev.service...
Jul 10 00:33:37.878603 kernel: audit: type=1130 audit(1752107617.875:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:37.924251 kernel: raid6: neonx8 gen() 13803 MB/s
Jul 10 00:33:37.941234 kernel: raid6: neonx8 xor() 10748 MB/s
Jul 10 00:33:37.958254 kernel: raid6: neonx4 gen() 13539 MB/s
Jul 10 00:33:37.975246 kernel: raid6: neonx4 xor() 11195 MB/s
Jul 10 00:33:37.992243 kernel: raid6: neonx2 gen() 13076 MB/s
Jul 10 00:33:38.009249 kernel: raid6: neonx2 xor() 10259 MB/s
Jul 10 00:33:38.026254 kernel: raid6: neonx1 gen() 10486 MB/s
Jul 10 00:33:38.043250 kernel: raid6: neonx1 xor() 8786 MB/s
Jul 10 00:33:38.060252 kernel: raid6: int64x8 gen() 6262 MB/s
Jul 10 00:33:38.077248 kernel: raid6: int64x8 xor() 3544 MB/s
Jul 10 00:33:38.094263 kernel: raid6: int64x4 gen() 7223 MB/s
Jul 10 00:33:38.111264 kernel: raid6: int64x4 xor() 3687 MB/s
Jul 10 00:33:38.128251 kernel: raid6: int64x2 gen() 6147 MB/s
Jul 10 00:33:38.145261 kernel: raid6: int64x2 xor() 3312 MB/s
Jul 10 00:33:38.162250 kernel: raid6: int64x1 gen() 5046 MB/s
Jul 10 00:33:38.179484 kernel: raid6: int64x1 xor() 2646 MB/s
Jul 10 00:33:38.179503 kernel: raid6: using algorithm neonx8 gen() 13803 MB/s
Jul 10 00:33:38.179513 kernel: raid6: .... xor() 10748 MB/s, rmw enabled
Jul 10 00:33:38.179521 kernel: raid6: using neon recovery algorithm
Jul 10 00:33:38.190238 kernel: xor: measuring software checksum speed
Jul 10 00:33:38.190266 kernel: 8regs : 17173 MB/sec
Jul 10 00:33:38.191604 kernel: 32regs : 19641 MB/sec
Jul 10 00:33:38.191616 kernel: arm64_neon : 27738 MB/sec
Jul 10 00:33:38.191625 kernel: xor: using function: arm64_neon (27738 MB/sec)
Jul 10 00:33:38.254257 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 10 00:33:38.268471 systemd[1]: Finished dracut-pre-udev.service.
Jul 10 00:33:38.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:38.270079 systemd[1]: Starting systemd-udevd.service...
Jul 10 00:33:38.273524 kernel: audit: type=1130 audit(1752107618.268:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:38.273555 kernel: audit: type=1334 audit(1752107618.269:9): prog-id=7 op=LOAD
Jul 10 00:33:38.273564 kernel: audit: type=1334 audit(1752107618.269:10): prog-id=8 op=LOAD
Jul 10 00:33:38.269000 audit: BPF prog-id=7 op=LOAD
Jul 10 00:33:38.269000 audit: BPF prog-id=8 op=LOAD
Jul 10 00:33:38.288727 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Jul 10 00:33:38.292185 systemd[1]: Started systemd-udevd.service.
Jul 10 00:33:38.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:38.293803 systemd[1]: Starting dracut-pre-trigger.service...
Jul 10 00:33:38.306606 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Jul 10 00:33:38.336638 systemd[1]: Finished dracut-pre-trigger.service.
Jul 10 00:33:38.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:38.338185 systemd[1]: Starting systemd-udev-trigger.service...
Jul 10 00:33:38.374701 systemd[1]: Finished systemd-udev-trigger.service.
Jul 10 00:33:38.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:38.404074 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 00:33:38.408496 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:33:38.408512 kernel: GPT:9289727 != 19775487
Jul 10 00:33:38.408521 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:33:38.408529 kernel: GPT:9289727 != 19775487
Jul 10 00:33:38.408538 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:33:38.408546 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:33:38.425260 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (540)
Jul 10 00:33:38.427737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 10 00:33:38.428639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 10 00:33:38.434887 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 10 00:33:38.438125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 10 00:33:38.442206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 10 00:33:38.444582 systemd[1]: Starting disk-uuid.service...
Jul 10 00:33:38.450501 disk-uuid[562]: Primary Header is updated.
disk-uuid[562]: Secondary Entries is updated.
disk-uuid[562]: Secondary Header is updated.
Jul 10 00:33:38.454244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:33:39.465848 disk-uuid[563]: The operation has completed successfully.
Jul 10 00:33:39.466755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:33:39.487926 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:33:39.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.488038 systemd[1]: Finished disk-uuid.service.
Jul 10 00:33:39.489655 systemd[1]: Starting verity-setup.service...
Jul 10 00:33:39.505250 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 00:33:39.527892 systemd[1]: Found device dev-mapper-usr.device.
Jul 10 00:33:39.529244 systemd[1]: Mounting sysusr-usr.mount...
Jul 10 00:33:39.529888 systemd[1]: Finished verity-setup.service.
Jul 10 00:33:39.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.577251 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 10 00:33:39.577599 systemd[1]: Mounted sysusr-usr.mount.
Jul 10 00:33:39.578323 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 10 00:33:39.579046 systemd[1]: Starting ignition-setup.service...
Jul 10 00:33:39.580728 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 10 00:33:39.588695 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:33:39.588740 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:33:39.588750 kernel: BTRFS info (device vda6): has skinny extents
Jul 10 00:33:39.598316 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 10 00:33:39.604765 systemd[1]: Finished ignition-setup.service.
Jul 10 00:33:39.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.606327 systemd[1]: Starting ignition-fetch-offline.service...
Jul 10 00:33:39.671145 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 10 00:33:39.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.672000 audit: BPF prog-id=9 op=LOAD
Jul 10 00:33:39.673596 systemd[1]: Starting systemd-networkd.service...
Jul 10 00:33:39.696780 systemd-networkd[738]: lo: Link UP
Jul 10 00:33:39.696793 systemd-networkd[738]: lo: Gained carrier
Jul 10 00:33:39.697294 systemd-networkd[738]: Enumeration completed
Jul 10 00:33:39.697479 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:33:39.698773 systemd-networkd[738]: eth0: Link UP
Jul 10 00:33:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.698777 systemd-networkd[738]: eth0: Gained carrier
Jul 10 00:33:39.699677 systemd[1]: Started systemd-networkd.service.
Jul 10 00:33:39.701384 systemd[1]: Reached target network.target.
Jul 10 00:33:39.704748 ignition[651]: Ignition 2.14.0
Jul 10 00:33:39.703299 systemd[1]: Starting iscsiuio.service...
Jul 10 00:33:39.704755 ignition[651]: Stage: fetch-offline
Jul 10 00:33:39.704797 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:33:39.704813 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:33:39.704977 ignition[651]: parsed url from cmdline: ""
Jul 10 00:33:39.704981 ignition[651]: no config URL provided
Jul 10 00:33:39.704994 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:33:39.705003 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:33:39.705024 ignition[651]: op(1): [started] loading QEMU firmware config module
Jul 10 00:33:39.705029 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:33:39.714199 ignition[651]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:33:39.716684 systemd[1]: Started iscsiuio.service.
Jul 10 00:33:39.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.718239 systemd[1]: Starting iscsid.service...
Jul 10 00:33:39.721664 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 10 00:33:39.721664 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul 10 00:33:39.721664 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 10 00:33:39.721664 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 10 00:33:39.721664 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 10 00:33:39.721664 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 10 00:33:39.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.725283 systemd[1]: Started iscsid.service.
Jul 10 00:33:39.728334 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:33:39.728854 systemd[1]: Starting dracut-initqueue.service...
Jul 10 00:33:39.740123 systemd[1]: Finished dracut-initqueue.service.
Jul 10 00:33:39.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.741038 systemd[1]: Reached target remote-fs-pre.target.
Jul 10 00:33:39.742277 systemd[1]: Reached target remote-cryptsetup.target.
Jul 10 00:33:39.743608 systemd[1]: Reached target remote-fs.target.
Jul 10 00:33:39.745698 systemd[1]: Starting dracut-pre-mount.service...
Jul 10 00:33:39.754706 systemd[1]: Finished dracut-pre-mount.service.
Jul 10 00:33:39.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.769557 ignition[651]: parsing config with SHA512: 6d7eb5db804e434def0301cac7a8edc3c9a2152825943f0c1b15cf4164c9338911c84722de8dbb40798d1136f303fd76545a66ed096e0635c0fae400470e0f90
Jul 10 00:33:39.782157 unknown[651]: fetched base config from "system"
Jul 10 00:33:39.782168 unknown[651]: fetched user config from "qemu"
Jul 10 00:33:39.782618 ignition[651]: fetch-offline: fetch-offline passed
Jul 10 00:33:39.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.783829 systemd[1]: Finished ignition-fetch-offline.service.
Jul 10 00:33:39.782670 ignition[651]: Ignition finished successfully
Jul 10 00:33:39.785044 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:33:39.785897 systemd[1]: Starting ignition-kargs.service...
Jul 10 00:33:39.794717 ignition[759]: Ignition 2.14.0
Jul 10 00:33:39.794732 ignition[759]: Stage: kargs
Jul 10 00:33:39.794820 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:33:39.794829 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:33:39.797353 systemd[1]: Finished ignition-kargs.service.
Jul 10 00:33:39.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.795895 ignition[759]: kargs: kargs passed
Jul 10 00:33:39.795933 ignition[759]: Ignition finished successfully
Jul 10 00:33:39.799488 systemd[1]: Starting ignition-disks.service...
Jul 10 00:33:39.806682 ignition[765]: Ignition 2.14.0
Jul 10 00:33:39.806692 ignition[765]: Stage: disks
Jul 10 00:33:39.806795 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:33:39.806812 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:33:39.807765 ignition[765]: disks: disks passed
Jul 10 00:33:39.807821 ignition[765]: Ignition finished successfully
Jul 10 00:33:39.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.810027 systemd[1]: Finished ignition-disks.service.
Jul 10 00:33:39.811021 systemd[1]: Reached target initrd-root-device.target.
Jul 10 00:33:39.812039 systemd[1]: Reached target local-fs-pre.target.
Jul 10 00:33:39.813051 systemd[1]: Reached target local-fs.target.
Jul 10 00:33:39.814129 systemd[1]: Reached target sysinit.target.
Jul 10 00:33:39.815165 systemd[1]: Reached target basic.target.
Jul 10 00:33:39.817184 systemd[1]: Starting systemd-fsck-root.service...
Jul 10 00:33:39.828959 systemd-fsck[773]: ROOT: clean, 619/553520 files, 56022/553472 blocks
Jul 10 00:33:39.833185 systemd[1]: Finished systemd-fsck-root.service.
Jul 10 00:33:39.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.834834 systemd[1]: Mounting sysroot.mount...
Jul 10 00:33:39.844247 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 10 00:33:39.844398 systemd[1]: Mounted sysroot.mount.
Jul 10 00:33:39.845012 systemd[1]: Reached target initrd-root-fs.target.
Jul 10 00:33:39.846957 systemd[1]: Mounting sysroot-usr.mount...
Jul 10 00:33:39.847737 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 10 00:33:39.847776 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:33:39.847801 systemd[1]: Reached target ignition-diskful.target.
Jul 10 00:33:39.850053 systemd[1]: Mounted sysroot-usr.mount.
Jul 10 00:33:39.852144 systemd[1]: Starting initrd-setup-root.service...
Jul 10 00:33:39.856746 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:33:39.860855 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:33:39.864838 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:33:39.868351 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:33:39.898604 systemd[1]: Finished initrd-setup-root.service.
Jul 10 00:33:39.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.900202 systemd[1]: Starting ignition-mount.service...
Jul 10 00:33:39.901558 systemd[1]: Starting sysroot-boot.service...
Jul 10 00:33:39.905920 bash[824]: umount: /sysroot/usr/share/oem: not mounted.
Jul 10 00:33:39.913821 ignition[826]: INFO : Ignition 2.14.0
Jul 10 00:33:39.913821 ignition[826]: INFO : Stage: mount
Jul 10 00:33:39.915077 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:33:39.915077 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:33:39.915077 ignition[826]: INFO : mount: mount passed
Jul 10 00:33:39.915077 ignition[826]: INFO : Ignition finished successfully
Jul 10 00:33:39.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:39.915664 systemd[1]: Finished ignition-mount.service.
Jul 10 00:33:39.926752 systemd[1]: Finished sysroot-boot.service.
Jul 10 00:33:39.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:40.537317 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 10 00:33:40.546265 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (834)
Jul 10 00:33:40.551645 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:33:40.551667 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:33:40.551678 kernel: BTRFS info (device vda6): has skinny extents
Jul 10 00:33:40.555108 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 10 00:33:40.560212 systemd[1]: Starting ignition-files.service...
Jul 10 00:33:40.580502 ignition[854]: INFO : Ignition 2.14.0
Jul 10 00:33:40.580502 ignition[854]: INFO : Stage: files
Jul 10 00:33:40.581714 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:33:40.581714 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:33:40.583559 ignition[854]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:33:40.589948 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:33:40.589948 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:33:40.594653 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:33:40.595698 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:33:40.595698 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:33:40.595376 unknown[854]: wrote ssh authorized keys file for user: core
Jul 10 00:33:40.598885 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 00:33:40.598885 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 10 00:33:41.079893 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:33:41.189346 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 00:33:41.190966 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:33:41.192371 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 00:33:41.339342 systemd-networkd[738]: eth0: Gained IPv6LL
Jul 10 00:33:41.542943 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:33:41.625861 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:33:41.625861 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:33:41.628648 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:33:41.628648 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:33:41.628648 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:33:41.628648 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:33:41.633642 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:33:41.633642 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:33:41.633642 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:33:41.652225 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:33:41.653566 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:33:41.653566 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:33:41.653566 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:33:41.653566 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:33:41.653566 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 10 00:33:42.173568 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:33:42.664031 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:33:42.664031 ignition[854]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:33:42.667439 ignition[854]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:33:42.701558 ignition[854]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:33:42.702727 ignition[854]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:33:42.702727 ignition[854]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:33:42.702727 ignition[854]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:33:42.702727 ignition[854]: INFO : files: files passed
Jul 10 00:33:42.702727 ignition[854]: INFO : Ignition finished successfully
Jul 10 00:33:42.712171 kernel: kauditd_printk_skb: 21 callbacks suppressed
Jul 10 00:33:42.712194 kernel: audit: type=1130 audit(1752107622.704:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.703028 systemd[1]: Finished ignition-files.service.
Jul 10 00:33:42.705669 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 10 00:33:42.713883 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 10 00:33:42.719026 kernel: audit: type=1130 audit(1752107622.714:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.719048 kernel: audit: type=1131 audit(1752107622.714:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.709124 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 10 00:33:42.720570 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:33:42.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.709973 systemd[1]: Starting ignition-quench.service...
Jul 10 00:33:42.725305 kernel: audit: type=1130 audit(1752107622.720:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.713352 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:33:42.713441 systemd[1]: Finished ignition-quench.service.
Jul 10 00:33:42.719662 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 10 00:33:42.721345 systemd[1]: Reached target ignition-complete.target.
Jul 10 00:33:42.725686 systemd[1]: Starting initrd-parse-etc.service...
Jul 10 00:33:42.739691 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:33:42.739803 systemd[1]: Finished initrd-parse-etc.service.
Jul 10 00:33:42.745124 kernel: audit: type=1130 audit(1752107622.740:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.745145 kernel: audit: type=1131 audit(1752107622.740:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.741194 systemd[1]: Reached target initrd-fs.target.
Jul 10 00:33:42.745700 systemd[1]: Reached target initrd.target.
Jul 10 00:33:42.746777 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 10 00:33:42.747769 systemd[1]: Starting dracut-pre-pivot.service...
Jul 10 00:33:42.759964 systemd[1]: Finished dracut-pre-pivot.service.
Jul 10 00:33:42.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.761614 systemd[1]: Starting initrd-cleanup.service...
Jul 10 00:33:42.764250 kernel: audit: type=1130 audit(1752107622.760:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:33:42.770230 systemd[1]: Stopped target nss-lookup.target.
Jul 10 00:33:42.770914 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:33:42.772026 systemd[1]: Stopped target timers.target. Jul 10 00:33:42.773062 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:33:42.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.773173 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 00:33:42.777412 kernel: audit: type=1131 audit(1752107622.773:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.774236 systemd[1]: Stopped target initrd.target. Jul 10 00:33:42.777015 systemd[1]: Stopped target basic.target. Jul 10 00:33:42.777962 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:33:42.778991 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:33:42.780046 systemd[1]: Stopped target initrd-root-device.target. Jul 10 00:33:42.781142 systemd[1]: Stopped target remote-fs.target. Jul 10 00:33:42.782186 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 00:33:42.783278 systemd[1]: Stopped target sysinit.target. Jul 10 00:33:42.784347 systemd[1]: Stopped target local-fs.target. Jul 10 00:33:42.785344 systemd[1]: Stopped target local-fs-pre.target. Jul 10 00:33:42.786318 systemd[1]: Stopped target swap.target. Jul 10 00:33:42.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.787309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:33:42.791983 kernel: audit: type=1131 audit(1752107622.788:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.787427 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:33:42.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.788603 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:33:42.796020 kernel: audit: type=1131 audit(1752107622.792:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.791433 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:33:42.791538 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:33:42.792648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:33:42.792738 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:33:42.795680 systemd[1]: Stopped target paths.target. Jul 10 00:33:42.796550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:33:42.799275 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:33:42.800493 systemd[1]: Stopped target slices.target. 
Jul 10 00:33:42.801784 systemd[1]: Stopped target sockets.target. Jul 10 00:33:42.802937 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:33:42.803019 systemd[1]: Closed iscsid.socket. Jul 10 00:33:42.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.803955 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:33:42.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.804026 systemd[1]: Closed iscsiuio.socket. Jul 10 00:33:42.804962 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:33:42.805062 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:33:42.806200 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:33:42.806301 systemd[1]: Stopped ignition-files.service. Jul 10 00:33:42.808089 systemd[1]: Stopping ignition-mount.service... Jul 10 00:33:42.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.809721 systemd[1]: Stopping sysroot-boot.service... Jul 10 00:33:42.810830 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:33:42.811028 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:33:42.815925 ignition[894]: INFO : Ignition 2.14.0 Jul 10 00:33:42.815925 ignition[894]: INFO : Stage: umount Jul 10 00:33:42.815925 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:33:42.815925 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:33:42.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.812240 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:33:42.820849 ignition[894]: INFO : umount: umount passed Jul 10 00:33:42.820849 ignition[894]: INFO : Ignition finished successfully Jul 10 00:33:42.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.812375 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 00:33:42.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.817545 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 10 00:33:42.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.817628 systemd[1]: Finished initrd-cleanup.service. Jul 10 00:33:42.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.819734 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:33:42.820375 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:33:42.820450 systemd[1]: Stopped ignition-mount.service. Jul 10 00:33:42.821629 systemd[1]: Stopped target network.target. Jul 10 00:33:42.822816 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:33:42.822866 systemd[1]: Stopped ignition-disks.service. Jul 10 00:33:42.824045 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:33:42.824090 systemd[1]: Stopped ignition-kargs.service. Jul 10 00:33:42.825059 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:33:42.825097 systemd[1]: Stopped ignition-setup.service. Jul 10 00:33:42.826516 systemd[1]: Stopping systemd-networkd.service... Jul 10 00:33:42.827408 systemd[1]: Stopping systemd-resolved.service... Jul 10 00:33:42.834270 systemd-networkd[738]: eth0: DHCPv6 lease lost Jul 10 00:33:42.835649 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:33:42.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.835751 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:33:42.837056 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:33:42.837087 systemd[1]: Closed systemd-networkd.socket. Jul 10 00:33:42.838776 systemd[1]: Stopping network-cleanup.service... Jul 10 00:33:42.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.839813 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:33:42.839870 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:33:42.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.841194 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:33:42.844000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:33:42.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.841384 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:33:42.843205 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:33:42.843267 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:33:42.844945 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:33:42.850036 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 10 00:33:42.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.850546 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:33:42.850736 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:33:42.854984 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:33:42.855088 systemd[1]: Stopped network-cleanup.service. Jul 10 00:33:42.856000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:33:42.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.856784 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:33:42.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.856899 systemd[1]: Stopped systemd-udevd.service. Jul 10 00:33:42.857834 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:33:42.857872 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:33:42.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.859675 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:33:42.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.859711 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:33:42.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.860753 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:33:42.860800 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:33:42.862898 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:33:42.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.862940 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:33:42.863922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:33:42.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.863965 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:33:42.866720 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:33:42.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:42.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.868015 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:33:42.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.868087 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 10 00:33:42.870077 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:33:42.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:42.870121 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 00:33:42.870933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:33:42.870984 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:33:42.872803 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 00:33:42.873274 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:33:42.873353 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:33:42.874360 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:33:42.874445 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:33:42.875481 systemd[1]: Reached target initrd-switch-root.target. Jul 10 00:33:42.876750 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:33:42.876802 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:33:42.878655 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:33:42.885392 systemd[1]: Switching root. Jul 10 00:33:42.901572 iscsid[745]: iscsid shutting down. Jul 10 00:33:42.902166 systemd-journald[289]: Journal stopped Jul 10 00:33:44.899067 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Jul 10 00:33:44.899129 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 00:33:44.899143 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 00:33:44.899156 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:33:44.899171 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:33:44.899180 kernel: SELinux: policy capability open_perms=1 Jul 10 00:33:44.899190 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:33:44.899200 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:33:44.899209 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:33:44.899232 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:33:44.899244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:33:44.899254 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:33:44.899267 systemd[1]: Successfully loaded SELinux policy in 34.009ms. Jul 10 00:33:44.899282 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.971ms. 
Jul 10 00:33:44.899295 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:33:44.899306 systemd[1]: Detected virtualization kvm. Jul 10 00:33:44.899316 systemd[1]: Detected architecture arm64. Jul 10 00:33:44.899327 systemd[1]: Detected first boot. Jul 10 00:33:44.899341 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:33:44.899352 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:33:44.899362 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:33:44.899374 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:33:44.899386 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:33:44.899398 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:33:44.899411 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 10 00:33:44.899421 systemd[1]: Stopped iscsiuio.service. Jul 10 00:33:44.899431 systemd[1]: iscsid.service: Deactivated successfully. Jul 10 00:33:44.899441 systemd[1]: Stopped iscsid.service. Jul 10 00:33:44.899451 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:33:44.899465 systemd[1]: Stopped initrd-switch-root.service. Jul 10 00:33:44.899475 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:33:44.899485 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:33:44.899497 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:33:44.899511 systemd[1]: Created slice system-getty.slice. Jul 10 00:33:44.899522 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:33:44.899533 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:33:44.899543 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:33:44.899554 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:33:44.899564 systemd[1]: Created slice user.slice. Jul 10 00:33:44.899575 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:33:44.899585 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:33:44.899597 systemd[1]: Set up automount boot.automount. Jul 10 00:33:44.899607 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:33:44.899618 systemd[1]: Stopped target initrd-switch-root.target. Jul 10 00:33:44.899628 systemd[1]: Stopped target initrd-fs.target. Jul 10 00:33:44.899639 systemd[1]: Stopped target initrd-root-fs.target. Jul 10 00:33:44.899649 systemd[1]: Reached target integritysetup.target. Jul 10 00:33:44.899672 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:33:44.899683 systemd[1]: Reached target remote-fs.target. Jul 10 00:33:44.899693 systemd[1]: Reached target slices.target. Jul 10 00:33:44.899704 systemd[1]: Reached target swap.target. Jul 10 00:33:44.899714 systemd[1]: Reached target torcx.target. 
Jul 10 00:33:44.899725 systemd[1]: Reached target veritysetup.target. Jul 10 00:33:44.899736 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:33:44.899746 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:33:44.899757 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:33:44.899767 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:33:44.899778 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:33:44.899789 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:33:44.899799 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:33:44.899809 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:33:44.899820 systemd[1]: Mounting media.mount... Jul 10 00:33:44.899830 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:33:44.899840 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:33:44.899851 systemd[1]: Mounting tmp.mount... Jul 10 00:33:44.899861 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:33:44.899871 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:33:44.899882 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:33:44.899893 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:33:44.899903 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:33:44.899919 systemd[1]: Starting modprobe@drm.service... Jul 10 00:33:44.899929 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:33:44.899944 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:33:44.899956 systemd[1]: Starting modprobe@loop.service... Jul 10 00:33:44.899967 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:33:44.899977 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:33:44.899989 systemd[1]: Stopped systemd-fsck-root.service. Jul 10 00:33:44.900004 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:33:44.900015 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:33:44.900025 systemd[1]: Stopped systemd-journald.service. Jul 10 00:33:44.900035 kernel: loop: module loaded Jul 10 00:33:44.900046 kernel: fuse: init (API version 7.34) Jul 10 00:33:44.900057 systemd[1]: Starting systemd-journald.service... Jul 10 00:33:44.900067 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:33:44.900078 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:33:44.900088 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:33:44.900098 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:33:44.900109 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:33:44.900120 systemd[1]: Stopped verity-setup.service. Jul 10 00:33:44.900131 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:33:44.900141 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:33:44.900152 systemd[1]: Mounted media.mount. Jul 10 00:33:44.900164 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:33:44.900175 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:33:44.900186 systemd[1]: Mounted tmp.mount. Jul 10 00:33:44.900196 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:33:44.900208 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:33:44.900238 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:33:44.900251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:33:44.900262 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 10 00:33:44.900275 systemd-journald[993]: Journal started Jul 10 00:33:44.900317 systemd-journald[993]: Runtime Journal (/run/log/journal/d07d8d157d84429caa2f0e789d0461d8) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:33:42.969000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:33:43.062000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:33:43.063000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:33:43.063000 audit: BPF prog-id=10 op=LOAD Jul 10 00:33:43.063000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:33:43.063000 audit: BPF prog-id=11 op=LOAD Jul 10 00:33:43.063000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:33:43.098000 audit[927]: AVC avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:33:43.098000 audit[927]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:33:43.098000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:33:43.099000 audit[927]: AVC avc: denied { associate } for pid=927 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 10 00:33:43.099000 audit[927]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:33:43.099000 audit: CWD cwd="/" Jul 10 00:33:43.099000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:33:43.099000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:33:43.099000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:33:44.774000 audit: BPF prog-id=12 op=LOAD Jul 10 00:33:44.774000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:33:44.774000 audit: BPF prog-id=13 op=LOAD Jul 10 00:33:44.774000 audit: BPF prog-id=14 op=LOAD Jul 10 00:33:44.774000 audit: 
BPF prog-id=4 op=UNLOAD Jul 10 00:33:44.774000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:33:44.775000 audit: BPF prog-id=15 op=LOAD Jul 10 00:33:44.775000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:33:44.775000 audit: BPF prog-id=16 op=LOAD Jul 10 00:33:44.775000 audit: BPF prog-id=17 op=LOAD Jul 10 00:33:44.775000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:33:44.775000 audit: BPF prog-id=14 op=UNLOAD Jul 10 00:33:44.776000 audit: BPF prog-id=18 op=LOAD Jul 10 00:33:44.776000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:33:44.776000 audit: BPF prog-id=19 op=LOAD Jul 10 00:33:44.776000 audit: BPF prog-id=20 op=LOAD Jul 10 00:33:44.776000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:33:44.776000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:33:44.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.787000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:33:44.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.870000 audit: BPF prog-id=21 op=LOAD Jul 10 00:33:44.871000 audit: BPF prog-id=22 op=LOAD Jul 10 00:33:44.871000 audit: BPF prog-id=23 op=LOAD Jul 10 00:33:44.871000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:33:44.871000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:33:44.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:44.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.897000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:33:44.897000 audit[993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe7c488b0 a2=4000 a3=1 items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:33:44.897000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:33:44.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:43.096314 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:33:44.772562 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:33:43.096618 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:33:44.772574 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:33:43.096637 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:33:44.776692 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:33:43.096669 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 10 00:33:44.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:43.096678 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 10 00:33:43.096708 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 10 00:33:44.902275 systemd[1]: Started systemd-journald.service. Jul 10 00:33:43.096719 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 10 00:33:44.902279 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:33:43.096921 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 10 00:33:44.902428 systemd[1]: Finished modprobe@drm.service. Jul 10 00:33:43.096967 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:33:43.096983 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:33:43.097670 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 10 00:33:43.097704 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 10 00:33:43.097721 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 10 00:33:43.097735 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 10 00:33:43.097752 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 10 00:33:44.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:43.097765 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 10 00:33:44.526969 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:33:44.527251 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:33:44.527347 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:33:44.527503 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:33:44.527552 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 10 00:33:44.527607 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2025-07-10T00:33:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 10 00:33:44.903539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:33:44.903700 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:33:44.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.904729 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:33:44.904882 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:33:44.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.905780 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 10 00:33:44.906192 systemd[1]: Finished modprobe@loop.service. Jul 10 00:33:44.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.907175 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:33:44.908265 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:33:44.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.909239 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:33:44.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.910191 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:33:44.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.911399 systemd[1]: Reached target network-pre.target. Jul 10 00:33:44.913336 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:33:44.914996 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:33:44.915813 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:33:44.917461 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:33:44.919064 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:33:44.919785 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:33:44.920787 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:33:44.921595 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:33:44.922716 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:33:44.925468 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:33:44.927802 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:33:44.930415 systemd-journald[993]: Time spent on flushing to /var/log/journal/d07d8d157d84429caa2f0e789d0461d8 is 19.841ms for 1003 entries. Jul 10 00:33:44.930415 systemd-journald[993]: System Journal (/var/log/journal/d07d8d157d84429caa2f0e789d0461d8) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:33:44.960490 systemd-journald[993]: Received client request to flush runtime journal. Jul 10 00:33:44.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:44.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.928806 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:33:44.933056 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:33:44.961056 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:33:44.933967 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:33:44.942588 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:33:44.944493 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:33:44.945597 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:33:44.947550 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:33:44.949353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:33:44.961452 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:33:44.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:44.971956 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:33:45.301070 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:33:45.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.302000 audit: BPF prog-id=24 op=LOAD Jul 10 00:33:45.302000 audit: BPF prog-id=25 op=LOAD Jul 10 00:33:45.302000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:33:45.302000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:33:45.303204 systemd[1]: Starting systemd-udevd.service... Jul 10 00:33:45.326281 systemd-udevd[1033]: Using default interface naming scheme 'v252'. Jul 10 00:33:45.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.339000 audit: BPF prog-id=26 op=LOAD Jul 10 00:33:45.337666 systemd[1]: Started systemd-udevd.service. Jul 10 00:33:45.340002 systemd[1]: Starting systemd-networkd.service... Jul 10 00:33:45.360000 audit: BPF prog-id=27 op=LOAD Jul 10 00:33:45.360000 audit: BPF prog-id=28 op=LOAD Jul 10 00:33:45.360000 audit: BPF prog-id=29 op=LOAD Jul 10 00:33:45.361235 systemd[1]: Starting systemd-userdbd.service... 
Jul 10 00:33:45.367133 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 10 00:33:45.403135 systemd[1]: Started systemd-userdbd.service. Jul 10 00:33:45.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.430306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:33:45.456632 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:33:45.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.458645 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:33:45.461085 systemd-networkd[1041]: lo: Link UP Jul 10 00:33:45.461097 systemd-networkd[1041]: lo: Gained carrier Jul 10 00:33:45.461566 systemd-networkd[1041]: Enumeration completed Jul 10 00:33:45.461694 systemd[1]: Started systemd-networkd.service. Jul 10 00:33:45.461730 systemd-networkd[1041]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:33:45.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.464137 systemd-networkd[1041]: eth0: Link UP Jul 10 00:33:45.464147 systemd-networkd[1041]: eth0: Gained carrier Jul 10 00:33:45.470072 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:33:45.493358 systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:33:45.507139 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:33:45.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.508005 systemd[1]: Reached target cryptsetup.target. Jul 10 00:33:45.509799 systemd[1]: Starting lvm2-activation.service... Jul 10 00:33:45.513479 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:33:45.543159 systemd[1]: Finished lvm2-activation.service. Jul 10 00:33:45.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.543988 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:33:45.544665 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:33:45.544692 systemd[1]: Reached target local-fs.target. Jul 10 00:33:45.545325 systemd[1]: Reached target machines.target. Jul 10 00:33:45.547164 systemd[1]: Starting ldconfig.service... Jul 10 00:33:45.548088 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:33:45.548145 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 10 00:33:45.549128 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:33:45.550909 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:33:45.552888 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:33:45.558044 systemd[1]: Starting systemd-sysext.service... Jul 10 00:33:45.559133 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) Jul 10 00:33:45.560744 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:33:45.564843 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:33:45.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.570302 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:33:45.580132 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:33:45.580339 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:33:45.694247 kernel: loop0: detected capacity change from 0 to 207008 Jul 10 00:33:45.700150 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:33:45.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.702483 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:33:45.704381 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) Jul 10 00:33:45.704381 systemd-fsck[1077]: /dev/vda1: 236 files, 117310/258078 clusters Jul 10 00:33:45.706392 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:33:45.707236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:33:45.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.709491 systemd[1]: Mounting boot.mount... Jul 10 00:33:45.716718 systemd[1]: Mounted boot.mount. Jul 10 00:33:45.723266 kernel: loop1: detected capacity change from 0 to 207008 Jul 10 00:33:45.723972 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:33:45.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.731630 (sd-sysext)[1082]: Using extensions 'kubernetes'. Jul 10 00:33:45.731965 (sd-sysext)[1082]: Merged extensions into '/usr'. Jul 10 00:33:45.748738 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:33:45.750170 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:33:45.752764 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:33:45.754928 systemd[1]: Starting modprobe@loop.service... Jul 10 00:33:45.755854 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:33:45.755999 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 10 00:33:45.756865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:33:45.757006 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:33:45.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.758324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:33:45.758441 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:33:45.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.759683 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:33:45.759797 systemd[1]: Finished modprobe@loop.service. Jul 10 00:33:45.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.761143 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:33:45.761263 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:33:45.824502 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:33:45.828103 systemd[1]: Finished ldconfig.service. Jul 10 00:33:45.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.888670 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:33:45.893579 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:33:45.895278 systemd[1]: Finished systemd-sysext.service. Jul 10 00:33:45.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:45.897121 systemd[1]: Starting ensure-sysext.service... Jul 10 00:33:45.898658 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:33:45.902922 systemd[1]: Reloading. Jul 10 00:33:45.907919 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:33:45.909189 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
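[Annotation] The ldconfig warning above ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is expected: ld.so.conf is a plain-text config file that ldconfig encounters while scanning and skips. The "magic bytes" are the four-byte ELF header; a small sketch of the same check:

ELF_MAGIC = b"\x7fELF"  # first four bytes of every ELF object

def is_elf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == ELF_MAGIC

# /lib/ld.so.conf begins with plain text, so is_elf("/lib/ld.so.conf") is False,
# which is exactly the condition ldconfig's warning is reporting.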
Jul 10 00:33:45.910571 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:33:45.949901 /usr/lib/systemd/system-generators/torcx-generator[1109]: time="2025-07-10T00:33:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:33:45.949942 /usr/lib/systemd/system-generators/torcx-generator[1109]: time="2025-07-10T00:33:45Z" level=info msg="torcx already run" Jul 10 00:33:45.998122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:33:45.998139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:33:46.013677 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:33:46.057000 audit: BPF prog-id=30 op=LOAD Jul 10 00:33:46.057000 audit: BPF prog-id=26 op=UNLOAD Jul 10 00:33:46.057000 audit: BPF prog-id=31 op=LOAD Jul 10 00:33:46.058000 audit: BPF prog-id=32 op=LOAD Jul 10 00:33:46.058000 audit: BPF prog-id=24 op=UNLOAD Jul 10 00:33:46.058000 audit: BPF prog-id=25 op=UNLOAD Jul 10 00:33:46.058000 audit: BPF prog-id=33 op=LOAD Jul 10 00:33:46.058000 audit: BPF prog-id=27 op=UNLOAD Jul 10 00:33:46.058000 audit: BPF prog-id=34 op=LOAD Jul 10 00:33:46.058000 audit: BPF prog-id=35 op=LOAD Jul 10 00:33:46.058000 audit: BPF prog-id=28 op=UNLOAD Jul 10 00:33:46.058000 audit: BPF prog-id=29 op=UNLOAD Jul 10 00:33:46.059000 audit: BPF prog-id=36 op=LOAD Jul 10 00:33:46.059000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:33:46.059000 audit: BPF prog-id=37 op=LOAD Jul 10 00:33:46.059000 audit: BPF prog-id=38 op=LOAD Jul 10 00:33:46.059000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:33:46.059000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:33:46.061535 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:33:46.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.065566 systemd[1]: Starting audit-rules.service... Jul 10 00:33:46.067321 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:33:46.074000 audit: BPF prog-id=39 op=LOAD Jul 10 00:33:46.069649 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:33:46.075754 systemd[1]: Starting systemd-resolved.service... Jul 10 00:33:46.077000 audit: BPF prog-id=40 op=LOAD Jul 10 00:33:46.078407 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:33:46.080807 systemd[1]: Starting systemd-update-utmp.service... Jul 10 00:33:46.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:46.087000 audit[1159]: SYSTEM_BOOT pid=1159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.082402 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:33:46.085255 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:33:46.090565 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.091962 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:33:46.093633 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:33:46.095423 systemd[1]: Starting modprobe@loop.service... Jul 10 00:33:46.096063 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.096280 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:33:46.096438 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:33:46.097663 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:33:46.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.099271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:33:46.099396 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:33:46.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.100529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:33:46.100635 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:33:46.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.101761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:33:46.101870 systemd[1]: Finished modprobe@loop.service. Jul 10 00:33:46.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:33:46.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.104390 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:33:46.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.106375 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.107632 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:33:46.109396 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:33:46.111277 systemd[1]: Starting modprobe@loop.service... Jul 10 00:33:46.111843 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.111970 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:33:46.113544 systemd[1]: Starting systemd-update-done.service... Jul 10 00:33:46.114301 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:33:46.115341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:33:46.115476 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:33:46.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.116460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:33:46.116571 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:33:46.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.117539 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:33:46.117640 systemd[1]: Finished modprobe@loop.service. Jul 10 00:33:46.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.120565 systemd[1]: Finished systemd-update-done.service. 
Jul 10 00:33:46.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:33:46.121000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:33:46.121000 audit[1176]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf1ddda0 a2=420 a3=0 items=0 ppid=1148 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:33:46.121000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:33:46.121782 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.121976 augenrules[1176]: No rules Jul 10 00:33:46.123010 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:33:46.124711 systemd[1]: Starting modprobe@drm.service... Jul 10 00:33:46.126379 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:33:46.127986 systemd[1]: Starting modprobe@loop.service... Jul 10 00:33:46.128673 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.128842 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:33:46.130076 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:33:46.130969 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:33:46.132086 systemd[1]: Finished audit-rules.service. Jul 10 00:33:46.133214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:33:46.133397 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:33:46.134339 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:33:46.134443 systemd[1]: Finished modprobe@drm.service. Jul 10 00:33:46.135386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:33:46.135484 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:33:46.136501 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:33:46.136727 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:33:46.136781 systemd-timesyncd[1158]: Initial clock synchronization to Thu 2025-07-10 00:33:46.267248 UTC. Jul 10 00:33:46.137819 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:33:46.137947 systemd[1]: Finished modprobe@loop.service. Jul 10 00:33:46.139339 systemd[1]: Reached target time-set.target. Jul 10 00:33:46.140148 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:33:46.140193 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.140475 systemd[1]: Finished ensure-sysext.service. Jul 10 00:33:46.141188 systemd-resolved[1154]: Positive Trust Anchors: Jul 10 00:33:46.141466 systemd-resolved[1154]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:33:46.141546 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:33:46.154395 systemd-resolved[1154]: Defaulting to hostname 'linux'. Jul 10 00:33:46.156041 systemd[1]: Started systemd-resolved.service. Jul 10 00:33:46.156760 systemd[1]: Reached target network.target. Jul 10 00:33:46.157408 systemd[1]: Reached target nss-lookup.target. Jul 10 00:33:46.157996 systemd[1]: Reached target sysinit.target. Jul 10 00:33:46.158655 systemd[1]: Started motdgen.path. Jul 10 00:33:46.159189 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:33:46.160211 systemd[1]: Started logrotate.timer. Jul 10 00:33:46.160886 systemd[1]: Started mdadm.timer. Jul 10 00:33:46.161496 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:33:46.162113 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:33:46.162144 systemd[1]: Reached target paths.target. Jul 10 00:33:46.162723 systemd[1]: Reached target timers.target. Jul 10 00:33:46.163644 systemd[1]: Listening on dbus.socket. Jul 10 00:33:46.165378 systemd[1]: Starting docker.socket... Jul 10 00:33:46.168768 systemd[1]: Listening on sshd.socket. Jul 10 00:33:46.169477 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:33:46.169904 systemd[1]: Listening on docker.socket. Jul 10 00:33:46.170606 systemd[1]: Reached target sockets.target. Jul 10 00:33:46.171175 systemd[1]: Reached target basic.target. Jul 10 00:33:46.171786 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.171814 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:33:46.172770 systemd[1]: Starting containerd.service... Jul 10 00:33:46.174444 systemd[1]: Starting dbus.service... Jul 10 00:33:46.176021 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:33:46.177889 systemd[1]: Starting extend-filesystems.service... Jul 10 00:33:46.178688 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:33:46.179996 systemd[1]: Starting motdgen.service... Jul 10 00:33:46.185202 systemd[1]: Starting prepare-helm.service... Jul 10 00:33:46.187147 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:33:46.190792 jq[1191]: false Jul 10 00:33:46.189232 systemd[1]: Starting sshd-keygen.service... Jul 10 00:33:46.192645 systemd[1]: Starting systemd-logind.service... Jul 10 00:33:46.193289 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
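[Annotation] When audit-rules.service loaded at 00:33:46.121 above, the kernel emitted a SYSCALL/PROCTITLE pair for auditctl. The PROCTITLE record stores the command line hex-encoded with NUL separators between arguments; decoding it recovers the exact invocation:

raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(raw).split(b"\x00")
print([a.decode() for a in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']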
Jul 10 00:33:46.193360 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:33:46.193882 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:33:46.194678 systemd[1]: Starting update-engine.service... Jul 10 00:33:46.196452 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:33:46.200247 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:33:46.200459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:33:46.202477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:33:46.202735 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:33:46.206212 jq[1205]: true Jul 10 00:33:46.208064 extend-filesystems[1192]: Found loop1 Jul 10 00:33:46.208898 extend-filesystems[1192]: Found vda Jul 10 00:33:46.209532 extend-filesystems[1192]: Found vda1 Jul 10 00:33:46.210514 extend-filesystems[1192]: Found vda2 Jul 10 00:33:46.211110 extend-filesystems[1192]: Found vda3 Jul 10 00:33:46.211737 extend-filesystems[1192]: Found usr Jul 10 00:33:46.212319 extend-filesystems[1192]: Found vda4 Jul 10 00:33:46.212886 extend-filesystems[1192]: Found vda6 Jul 10 00:33:46.213725 extend-filesystems[1192]: Found vda7 Jul 10 00:33:46.214375 extend-filesystems[1192]: Found vda9 Jul 10 00:33:46.214991 tar[1212]: linux-arm64/LICENSE Jul 10 00:33:46.214991 tar[1212]: linux-arm64/helm Jul 10 00:33:46.215409 extend-filesystems[1192]: Checking size of /dev/vda9 Jul 10 00:33:46.221778 jq[1216]: true Jul 10 00:33:46.223188 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:33:46.223423 systemd[1]: Finished motdgen.service. Jul 10 00:33:46.260300 extend-filesystems[1192]: Resized partition /dev/vda9 Jul 10 00:33:46.276727 dbus-daemon[1190]: [system] SELinux support is enabled Jul 10 00:33:46.276911 systemd[1]: Started dbus.service. Jul 10 00:33:46.279852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:33:46.279878 systemd[1]: Reached target system-config.target. Jul 10 00:33:46.280708 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:33:46.280728 systemd[1]: Reached target user-config.target. Jul 10 00:33:46.284376 extend-filesystems[1243]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:33:46.292287 bash[1239]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:33:46.293850 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:33:46.295835 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:33:46.297703 systemd-logind[1201]: New seat seat0. Jul 10 00:33:46.300242 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:33:46.308811 systemd[1]: Started systemd-logind.service. Jul 10 00:33:46.314153 update_engine[1203]: I0710 00:33:46.313884 1203 main.cc:92] Flatcar Update Engine starting Jul 10 00:33:46.316552 systemd[1]: Started update-engine.service. Jul 10 00:33:46.316657 update_engine[1203]: I0710 00:33:46.316581 1203 update_check_scheduler.cc:74] Next update check in 11m54s Jul 10 00:33:46.319167 systemd[1]: Started locksmithd.service. 
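[Annotation] extend-filesystems has just enumerated the vda partitions and queued a check of /dev/vda9; the kernel and resize2fs lines that follow show the root filesystem growing online from 553472 to 1864699 blocks of 4 KiB. In round numbers (block counts taken from the EXT4-fs messages just below):

BLOCK = 4096                          # ext4 block size (4k, per the log below)
before, after = 553_472, 1_864_699    # block counts from the resize messages
print(before * BLOCK / 2**30)         # ~2.11 GiB shipped image size
print(after * BLOCK / 2**30)          # ~7.11 GiB after growing to the partition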
Jul 10 00:33:46.323236 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:33:46.333331 extend-filesystems[1243]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:33:46.333331 extend-filesystems[1243]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:33:46.333331 extend-filesystems[1243]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:33:46.336814 extend-filesystems[1192]: Resized filesystem in /dev/vda9 Jul 10 00:33:46.336269 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:33:46.338191 env[1214]: time="2025-07-10T00:33:46.334322080Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:33:46.336440 systemd[1]: Finished extend-filesystems.service. Jul 10 00:33:46.368669 env[1214]: time="2025-07-10T00:33:46.368623120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:33:46.368808 env[1214]: time="2025-07-10T00:33:46.368783920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371305 env[1214]: time="2025-07-10T00:33:46.371268800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371352 env[1214]: time="2025-07-10T00:33:46.371304680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371559 env[1214]: time="2025-07-10T00:33:46.371533400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371559 env[1214]: time="2025-07-10T00:33:46.371556200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371609 env[1214]: time="2025-07-10T00:33:46.371569720Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:33:46.371609 env[1214]: time="2025-07-10T00:33:46.371579680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371753 env[1214]: time="2025-07-10T00:33:46.371666600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:33:46.371913 env[1214]: time="2025-07-10T00:33:46.371890880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:33:46.372056 env[1214]: time="2025-07-10T00:33:46.372033840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:33:46.372086 env[1214]: time="2025-07-10T00:33:46.372055240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 10 00:33:46.372131 env[1214]: time="2025-07-10T00:33:46.372111760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:33:46.372131 env[1214]: time="2025-07-10T00:33:46.372128960Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:33:46.377558 env[1214]: time="2025-07-10T00:33:46.377462360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:33:46.377558 env[1214]: time="2025-07-10T00:33:46.377503000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:33:46.377558 env[1214]: time="2025-07-10T00:33:46.377516520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:33:46.377558 env[1214]: time="2025-07-10T00:33:46.377547680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.377694 env[1214]: time="2025-07-10T00:33:46.377568760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.377694 env[1214]: time="2025-07-10T00:33:46.377583320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.377694 env[1214]: time="2025-07-10T00:33:46.377595840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.377953 env[1214]: time="2025-07-10T00:33:46.377923320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.377984 env[1214]: time="2025-07-10T00:33:46.377954680Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.377984 env[1214]: time="2025-07-10T00:33:46.377969600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.378026 env[1214]: time="2025-07-10T00:33:46.377989480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.378026 env[1214]: time="2025-07-10T00:33:46.378002760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:33:46.378131 env[1214]: time="2025-07-10T00:33:46.378111560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:33:46.378206 env[1214]: time="2025-07-10T00:33:46.378189040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:33:46.378454 env[1214]: time="2025-07-10T00:33:46.378435080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:33:46.378483 env[1214]: time="2025-07-10T00:33:46.378463840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378483 env[1214]: time="2025-07-10T00:33:46.378477520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:33:46.378596 env[1214]: time="2025-07-10T00:33:46.378581200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 10 00:33:46.378624 env[1214]: time="2025-07-10T00:33:46.378596960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378624 env[1214]: time="2025-07-10T00:33:46.378610720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378624 env[1214]: time="2025-07-10T00:33:46.378621720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378679 env[1214]: time="2025-07-10T00:33:46.378633400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378679 env[1214]: time="2025-07-10T00:33:46.378645800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378679 env[1214]: time="2025-07-10T00:33:46.378656680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378679 env[1214]: time="2025-07-10T00:33:46.378667720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378757 env[1214]: time="2025-07-10T00:33:46.378681800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:33:46.378820 env[1214]: time="2025-07-10T00:33:46.378800760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378854 env[1214]: time="2025-07-10T00:33:46.378822320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378854 env[1214]: time="2025-07-10T00:33:46.378835080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:33:46.378854 env[1214]: time="2025-07-10T00:33:46.378846880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:33:46.378913 env[1214]: time="2025-07-10T00:33:46.378860000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:33:46.378913 env[1214]: time="2025-07-10T00:33:46.378871240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:33:46.378913 env[1214]: time="2025-07-10T00:33:46.378887680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:33:46.378985 env[1214]: time="2025-07-10T00:33:46.378918400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:33:46.379168 env[1214]: time="2025-07-10T00:33:46.379116120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:33:46.379810 env[1214]: time="2025-07-10T00:33:46.379175400Z" level=info msg="Connect containerd service" Jul 10 00:33:46.379810 env[1214]: time="2025-07-10T00:33:46.379206600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:33:46.379810 env[1214]: time="2025-07-10T00:33:46.379780920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:33:46.380106 env[1214]: time="2025-07-10T00:33:46.380064400Z" level=info msg="Start subscribing containerd event" Jul 10 00:33:46.380138 env[1214]: time="2025-07-10T00:33:46.380120240Z" level=info msg="Start recovering state" Jul 10 00:33:46.380283 env[1214]: time="2025-07-10T00:33:46.380263800Z" level=info msg="Start event monitor" Jul 10 00:33:46.380311 env[1214]: time="2025-07-10T00:33:46.380292200Z" level=info msg="Start snapshots syncer" Jul 10 00:33:46.380311 env[1214]: time="2025-07-10T00:33:46.380303160Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:33:46.380350 env[1214]: time="2025-07-10T00:33:46.380310720Z" level=info msg="Start streaming server" Jul 10 00:33:46.380907 env[1214]: time="2025-07-10T00:33:46.380828000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 00:33:46.380965 env[1214]: time="2025-07-10T00:33:46.380949880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:33:46.381056 env[1214]: time="2025-07-10T00:33:46.381038040Z" level=info msg="containerd successfully booted in 0.050382s" Jul 10 00:33:46.381123 systemd[1]: Started containerd.service. Jul 10 00:33:46.393891 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:33:46.638872 tar[1212]: linux-arm64/README.md Jul 10 00:33:46.643103 systemd[1]: Finished prepare-helm.service. Jul 10 00:33:47.291504 systemd-networkd[1041]: eth0: Gained IPv6LL Jul 10 00:33:47.293166 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:33:47.294217 systemd[1]: Reached target network-online.target. Jul 10 00:33:47.296419 systemd[1]: Starting kubelet.service... Jul 10 00:33:47.894649 systemd[1]: Started kubelet.service. Jul 10 00:33:48.342080 kubelet[1258]: E0710 00:33:48.341979 1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:33:48.343628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:33:48.343765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:33:48.418945 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:33:48.437474 systemd[1]: Finished sshd-keygen.service. Jul 10 00:33:48.439715 systemd[1]: Starting issuegen.service... Jul 10 00:33:48.444423 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:33:48.444569 systemd[1]: Finished issuegen.service. Jul 10 00:33:48.446649 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:33:48.452694 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:33:48.454716 systemd[1]: Started getty@tty1.service. Jul 10 00:33:48.456741 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 10 00:33:48.457848 systemd[1]: Reached target getty.target. Jul 10 00:33:48.458706 systemd[1]: Reached target multi-user.target. Jul 10 00:33:48.460628 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:33:48.466925 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:33:48.467070 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:33:48.468171 systemd[1]: Startup finished in 547ms (kernel) + 5.363s (initrd) + 5.536s (userspace) = 11.447s. Jul 10 00:33:50.788839 systemd[1]: Created slice system-sshd.slice. Jul 10 00:33:50.789924 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:42006.service. Jul 10 00:33:50.846959 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 42006 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:33:50.849317 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:33:50.858161 systemd-logind[1201]: New session 1 of user core. Jul 10 00:33:50.859088 systemd[1]: Created slice user-500.slice. Jul 10 00:33:50.860254 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 00:33:50.868574 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:33:50.869933 systemd[1]: Starting user@500.service... 
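[Annotation] At 00:33:46.380 above, containerd reports serving on /run/containerd/containerd.sock (plus the ttrpc socket) just before systemd marks the unit started. A minimal liveness probe, assuming only that the path is a connectable UNIX stream socket (the real API needs a GRPC client, which is out of scope here):

import socket

# Bare connect(): enough to confirm the socket from the log is accepting.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/containerd/containerd.sock")
s.close()
print("containerd socket is accepting connections")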
Jul 10 00:33:50.873934 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:33:50.940750 systemd[1283]: Queued start job for default target default.target. Jul 10 00:33:50.941695 systemd[1283]: Reached target paths.target. Jul 10 00:33:50.942309 systemd[1283]: Reached target sockets.target. Jul 10 00:33:50.942447 systemd[1283]: Reached target timers.target. Jul 10 00:33:50.942459 systemd[1283]: Reached target basic.target. Jul 10 00:33:50.942503 systemd[1283]: Reached target default.target. Jul 10 00:33:50.942528 systemd[1283]: Startup finished in 62ms. Jul 10 00:33:50.942596 systemd[1]: Started user@500.service. Jul 10 00:33:50.943564 systemd[1]: Started session-1.scope. Jul 10 00:33:50.996107 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:42010.service. Jul 10 00:33:51.041691 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 42010 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:33:51.043228 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:33:51.048311 systemd-logind[1201]: New session 2 of user core. Jul 10 00:33:51.049061 systemd[1]: Started session-2.scope. Jul 10 00:33:51.105032 sshd[1292]: pam_unix(sshd:session): session closed for user core Jul 10 00:33:51.107545 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:42010.service: Deactivated successfully. Jul 10 00:33:51.108096 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:33:51.109632 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:33:51.110504 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:42024.service. Jul 10 00:33:51.116281 systemd-logind[1201]: Removed session 2. Jul 10 00:33:51.168383 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 42024 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:33:51.169515 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:33:51.172906 systemd-logind[1201]: New session 3 of user core. Jul 10 00:33:51.173698 systemd[1]: Started session-3.scope. Jul 10 00:33:51.224897 sshd[1298]: pam_unix(sshd:session): session closed for user core Jul 10 00:33:51.227435 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:42024.service: Deactivated successfully. Jul 10 00:33:51.228030 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:33:51.228654 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:33:51.231600 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:42038.service. Jul 10 00:33:51.232208 systemd-logind[1201]: Removed session 3. Jul 10 00:33:51.267113 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 42038 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:33:51.268495 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:33:51.272732 systemd-logind[1201]: New session 4 of user core. Jul 10 00:33:51.273532 systemd[1]: Started session-4.scope. Jul 10 00:33:51.331182 sshd[1304]: pam_unix(sshd:session): session closed for user core Jul 10 00:33:51.333967 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:42038.service: Deactivated successfully. Jul 10 00:33:51.334649 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:33:51.335129 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:33:51.336226 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:42052.service. Jul 10 00:33:51.336879 systemd-logind[1201]: Removed session 4. 
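[Annotation] The sshd block above opens and closes sessions 2 through 4 in quick succession, which is typical of scripted provisioning rather than interactive logins. Pulling the open/close timestamps for session 2 straight from the log shows how short-lived each one is:

from datetime import datetime

# Timestamps copied from the session-2 open/close lines above.
opened = datetime.strptime("00:33:51.049061", "%H:%M:%S.%f")
closed = datetime.strptime("00:33:51.105032", "%H:%M:%S.%f")
print((closed - opened).total_seconds())  # ~0.056 s for the whole session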
Jul 10 00:33:51.374590 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 42052 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:33:51.375781 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:33:51.379171 systemd-logind[1201]: New session 5 of user core. Jul 10 00:33:51.379947 systemd[1]: Started session-5.scope. Jul 10 00:33:51.444333 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:33:51.444575 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:33:51.506334 systemd[1]: Starting docker.service... Jul 10 00:33:51.588819 env[1326]: time="2025-07-10T00:33:51.588693224Z" level=info msg="Starting up" Jul 10 00:33:51.590746 env[1326]: time="2025-07-10T00:33:51.590653443Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:33:51.590746 env[1326]: time="2025-07-10T00:33:51.590744264Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:33:51.590852 env[1326]: time="2025-07-10T00:33:51.590764213Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:33:51.590852 env[1326]: time="2025-07-10T00:33:51.590775439Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:33:51.592930 env[1326]: time="2025-07-10T00:33:51.592901915Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:33:51.592930 env[1326]: time="2025-07-10T00:33:51.592925094Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:33:51.593024 env[1326]: time="2025-07-10T00:33:51.592939753Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:33:51.593024 env[1326]: time="2025-07-10T00:33:51.592949162Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:33:51.727959 env[1326]: time="2025-07-10T00:33:51.727924249Z" level=info msg="Loading containers: start." Jul 10 00:33:51.844268 kernel: Initializing XFRM netlink socket Jul 10 00:33:51.867215 env[1326]: time="2025-07-10T00:33:51.867174134Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 10 00:33:51.929303 systemd-networkd[1041]: docker0: Link UP Jul 10 00:33:52.002633 env[1326]: time="2025-07-10T00:33:52.002596190Z" level=info msg="Loading containers: done." Jul 10 00:33:52.037274 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck226945688-merged.mount: Deactivated successfully. Jul 10 00:33:52.040166 env[1326]: time="2025-07-10T00:33:52.040119155Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:33:52.040347 env[1326]: time="2025-07-10T00:33:52.040320184Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 10 00:33:52.040437 env[1326]: time="2025-07-10T00:33:52.040422836Z" level=info msg="Daemon has completed initialization" Jul 10 00:33:52.056802 systemd[1]: Started docker.service. 
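[Annotation] dockerd's note above explains why docker0 came up on 172.17.0.0/16: no --bip was passed, so the default bridge pool applies. What that default implies for container addressing:

import ipaddress

bridge = ipaddress.ip_network("172.17.0.0/16")  # default from the log line above
# Setting aside the network and broadcast addresses leaves ~65k host addresses
# for containers on the default bridge.
print(bridge.num_addresses - 2)  # 65534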
Jul 10 00:33:52.063120 env[1326]: time="2025-07-10T00:33:52.063069680Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:33:52.837048 env[1214]: time="2025-07-10T00:33:52.836810443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 00:33:53.530788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982064287.mount: Deactivated successfully. Jul 10 00:33:54.974635 env[1214]: time="2025-07-10T00:33:54.974588212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:54.976108 env[1214]: time="2025-07-10T00:33:54.976074157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:54.978001 env[1214]: time="2025-07-10T00:33:54.977972810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:54.979666 env[1214]: time="2025-07-10T00:33:54.979638863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:54.981353 env[1214]: time="2025-07-10T00:33:54.981315422Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 10 00:33:54.981941 env[1214]: time="2025-07-10T00:33:54.981917014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 00:33:56.736002 env[1214]: time="2025-07-10T00:33:56.735944626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:56.737744 env[1214]: time="2025-07-10T00:33:56.737703376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:56.739573 env[1214]: time="2025-07-10T00:33:56.739545452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:56.741514 env[1214]: time="2025-07-10T00:33:56.741486412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:56.742388 env[1214]: time="2025-07-10T00:33:56.742361486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 10 00:33:56.742867 env[1214]: time="2025-07-10T00:33:56.742786120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 00:33:58.179263 env[1214]: time="2025-07-10T00:33:58.179196579Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:58.183317 env[1214]: time="2025-07-10T00:33:58.183275409Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:58.187656 env[1214]: time="2025-07-10T00:33:58.187566754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:58.189001 env[1214]: time="2025-07-10T00:33:58.188954308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:58.190513 env[1214]: time="2025-07-10T00:33:58.190468256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 10 00:33:58.191463 env[1214]: time="2025-07-10T00:33:58.190956243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 00:33:58.594666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:33:58.594834 systemd[1]: Stopped kubelet.service. Jul 10 00:33:58.596480 systemd[1]: Starting kubelet.service... Jul 10 00:33:58.696532 systemd[1]: Started kubelet.service. Jul 10 00:33:58.737297 kubelet[1460]: E0710 00:33:58.737214 1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:33:58.739650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:33:58.739790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:33:59.335034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146678976.mount: Deactivated successfully. 
Jul 10 00:33:59.818485 env[1214]: time="2025-07-10T00:33:59.818374836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:59.820838 env[1214]: time="2025-07-10T00:33:59.820803756Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:59.822511 env[1214]: time="2025-07-10T00:33:59.822478525Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:59.825917 env[1214]: time="2025-07-10T00:33:59.825886213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:33:59.826504 env[1214]: time="2025-07-10T00:33:59.826468922Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 10 00:33:59.827096 env[1214]: time="2025-07-10T00:33:59.827070494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:34:00.405693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933487265.mount: Deactivated successfully. Jul 10 00:34:01.409181 env[1214]: time="2025-07-10T00:34:01.409135055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.411539 env[1214]: time="2025-07-10T00:34:01.411503001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.414097 env[1214]: time="2025-07-10T00:34:01.414070127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.416116 env[1214]: time="2025-07-10T00:34:01.416087994Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.416847 env[1214]: time="2025-07-10T00:34:01.416797094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 00:34:01.418400 env[1214]: time="2025-07-10T00:34:01.418349833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:34:01.863926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435044816.mount: Deactivated successfully. 
Jul 10 00:34:01.867154 env[1214]: time="2025-07-10T00:34:01.867101705Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.868541 env[1214]: time="2025-07-10T00:34:01.868503103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.870482 env[1214]: time="2025-07-10T00:34:01.870436639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.872185 env[1214]: time="2025-07-10T00:34:01.872145209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:01.872458 env[1214]: time="2025-07-10T00:34:01.872422465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:34:01.873168 env[1214]: time="2025-07-10T00:34:01.873136658Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 00:34:02.482168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429750552.mount: Deactivated successfully. Jul 10 00:34:05.164489 env[1214]: time="2025-07-10T00:34:05.164417387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:05.165871 env[1214]: time="2025-07-10T00:34:05.165834836Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:05.167638 env[1214]: time="2025-07-10T00:34:05.167607808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:05.169619 env[1214]: time="2025-07-10T00:34:05.169568177Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:05.170525 env[1214]: time="2025-07-10T00:34:05.170491337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 10 00:34:08.815629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:34:08.815800 systemd[1]: Stopped kubelet.service. Jul 10 00:34:08.817184 systemd[1]: Starting kubelet.service... Jul 10 00:34:08.930490 systemd[1]: Started kubelet.service. 
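[Annotation] The kubelet restart just logged will fail again immediately below, for the same reason as the earlier attempts: /var/lib/kubelet/config.yaml does not exist yet. On a node like this the file is normally written later by kubeadm during init/join (an assumption about how this node is managed); a trivial pre-check mirroring kubelet's complaint:

from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")  # path from the error message
if not cfg.exists():
    # The same condition kubelet exits on; kubeadm typically creates this
    # file when the node is initialized or joined.
    print(f"{cfg}: no such file or directory")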
Jul 10 00:34:08.969229 kubelet[1494]: E0710 00:34:08.969168 1494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:34:08.973700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:34:08.973819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:34:10.626323 systemd[1]: Stopped kubelet.service.
Jul 10 00:34:10.628640 systemd[1]: Starting kubelet.service...
Jul 10 00:34:10.654348 systemd[1]: Reloading.
Jul 10 00:34:10.714500 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-07-10T00:34:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 10 00:34:10.714539 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-07-10T00:34:10Z" level=info msg="torcx already run"
Jul 10 00:34:10.817960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 10 00:34:10.817981 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 10 00:34:10.833405 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:34:10.899961 systemd[1]: Started kubelet.service.
Jul 10 00:34:10.903611 systemd[1]: Stopping kubelet.service...
Jul 10 00:34:10.904363 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:34:10.904632 systemd[1]: Stopped kubelet.service.
Jul 10 00:34:10.906343 systemd[1]: Starting kubelet.service...
Jul 10 00:34:10.999289 systemd[1]: Started kubelet.service.
Jul 10 00:34:11.041925 kubelet[1578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:34:11.041925 kubelet[1578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:34:11.041925 kubelet[1578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:34:11.042332 kubelet[1578]: I0710 00:34:11.042106 1578 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:34:12.838426 kubelet[1578]: I0710 00:34:12.838369 1578 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 00:34:12.838426 kubelet[1578]: I0710 00:34:12.838410 1578 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:34:12.838772 kubelet[1578]: I0710 00:34:12.838687 1578 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 00:34:12.897597 kubelet[1578]: E0710 00:34:12.897558 1578 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:12.900751 kubelet[1578]: I0710 00:34:12.900726 1578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:34:12.913902 kubelet[1578]: E0710 00:34:12.913809 1578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:34:12.913902 kubelet[1578]: I0710 00:34:12.913881 1578 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:34:12.916795 kubelet[1578]: I0710 00:34:12.916775 1578 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:34:12.917044 kubelet[1578]: I0710 00:34:12.917020 1578 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:34:12.917234 kubelet[1578]: I0710 00:34:12.917046 1578 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:34:12.917430 kubelet[1578]: I0710 00:34:12.917361 1578 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:34:12.917430 kubelet[1578]: I0710 00:34:12.917375 1578 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 00:34:12.917629 kubelet[1578]: I0710 00:34:12.917615 1578 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:34:12.920979 kubelet[1578]: I0710 00:34:12.920952 1578 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 00:34:12.920979 kubelet[1578]: I0710 00:34:12.920979 1578 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:34:12.921061 kubelet[1578]: I0710 00:34:12.920999 1578 kubelet.go:352] "Adding apiserver pod source"
Jul 10 00:34:12.921061 kubelet[1578]: I0710 00:34:12.921015 1578 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:34:12.930809 kubelet[1578]: W0710 00:34:12.930753 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:12.930883 kubelet[1578]: E0710 00:34:12.930816 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:12.931907 kubelet[1578]: W0710 00:34:12.931865 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:12.931966 kubelet[1578]: E0710 00:34:12.931906 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:12.939301 kubelet[1578]: I0710 00:34:12.939276 1578 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 10 00:34:12.939915 kubelet[1578]: I0710 00:34:12.939883 1578 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:34:12.940007 kubelet[1578]: W0710 00:34:12.939992 1578 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:34:12.940912 kubelet[1578]: I0710 00:34:12.940886 1578 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:34:12.940986 kubelet[1578]: I0710 00:34:12.940920 1578 server.go:1287] "Started kubelet"
Jul 10 00:34:12.957668 kubelet[1578]: I0710 00:34:12.957625 1578 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:34:12.958582 kubelet[1578]: I0710 00:34:12.958556 1578 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 00:34:12.971989 kubelet[1578]: E0710 00:34:12.971760 1578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bca095c9256d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:34:12.940899693 +0000 UTC m=+1.937212147,LastTimestamp:2025-07-10 00:34:12.940899693 +0000 UTC m=+1.937212147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:34:12.972740 kubelet[1578]: I0710 00:34:12.972682 1578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:34:12.973132 kubelet[1578]: I0710 00:34:12.973118 1578 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:34:12.974198 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 10 00:34:12.974340 kubelet[1578]: I0710 00:34:12.974318 1578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:34:12.974536 kubelet[1578]: I0710 00:34:12.974520 1578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:34:12.975586 kubelet[1578]: E0710 00:34:12.974321 1578 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:34:12.975732 kubelet[1578]: I0710 00:34:12.975716 1578 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:34:12.975914 kubelet[1578]: I0710 00:34:12.975900 1578 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:34:12.976034 kubelet[1578]: I0710 00:34:12.976021 1578 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:34:12.976597 kubelet[1578]: W0710 00:34:12.976561 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:12.976723 kubelet[1578]: E0710 00:34:12.976702 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:12.976973 kubelet[1578]: I0710 00:34:12.976934 1578 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:34:12.977148 kubelet[1578]: I0710 00:34:12.977126 1578 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:34:12.977364 kubelet[1578]: E0710 00:34:12.977334 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms"
Jul 10 00:34:12.977587 kubelet[1578]: E0710 00:34:12.977237 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:12.978991 kubelet[1578]: I0710 00:34:12.978925 1578 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:34:12.991705 kubelet[1578]: I0710 00:34:12.991683 1578 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:34:12.991705 kubelet[1578]: I0710 00:34:12.991700 1578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:34:12.991839 kubelet[1578]: I0710 00:34:12.991718 1578 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:34:12.997233 kubelet[1578]: I0710 00:34:12.997181 1578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:34:12.998212 kubelet[1578]: I0710 00:34:12.998189 1578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:34:12.998348 kubelet[1578]: I0710 00:34:12.998335 1578 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 00:34:12.998430 kubelet[1578]: I0710 00:34:12.998416 1578 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:34:12.998488 kubelet[1578]: I0710 00:34:12.998476 1578 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 10 00:34:12.998581 kubelet[1578]: E0710 00:34:12.998566 1578 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:34:12.999184 kubelet[1578]: W0710 00:34:12.999120 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:12.999352 kubelet[1578]: E0710 00:34:12.999327 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:13.067191 kubelet[1578]: I0710 00:34:13.067147 1578 policy_none.go:49] "None policy: Start"
Jul 10 00:34:13.067191 kubelet[1578]: I0710 00:34:13.067176 1578 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:34:13.067191 kubelet[1578]: I0710 00:34:13.067188 1578 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:34:13.071600 systemd[1]: Created slice kubepods.slice.
Jul 10 00:34:13.075340 systemd[1]: Created slice kubepods-burstable.slice.
Jul 10 00:34:13.077629 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 10 00:34:13.077763 kubelet[1578]: E0710 00:34:13.077637 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:13.092963 kubelet[1578]: I0710 00:34:13.092833 1578 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 00:34:13.093067 kubelet[1578]: I0710 00:34:13.092979 1578 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:34:13.093067 kubelet[1578]: I0710 00:34:13.092990 1578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:34:13.093548 kubelet[1578]: I0710 00:34:13.093466 1578 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:34:13.095352 kubelet[1578]: E0710 00:34:13.095316 1578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:34:13.095423 kubelet[1578]: E0710 00:34:13.095369 1578 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 10 00:34:13.105304 systemd[1]: Created slice kubepods-burstable-pod18fff775d5fbd83cbcb968afa4461d64.slice.
Jul 10 00:34:13.116035 kubelet[1578]: E0710 00:34:13.115843 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:13.118105 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 10 00:34:13.119436 kubelet[1578]: E0710 00:34:13.119414 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:13.121539 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 10 00:34:13.122782 kubelet[1578]: E0710 00:34:13.122756 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:13.178306 kubelet[1578]: E0710 00:34:13.178266 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms"
Jul 10 00:34:13.194413 kubelet[1578]: I0710 00:34:13.194375 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:34:13.194821 kubelet[1578]: E0710 00:34:13.194794 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost"
Jul 10 00:34:13.277296 kubelet[1578]: I0710 00:34:13.277266 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18fff775d5fbd83cbcb968afa4461d64-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18fff775d5fbd83cbcb968afa4461d64\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:13.277469 kubelet[1578]: I0710 00:34:13.277448 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:13.277566 kubelet[1578]: I0710 00:34:13.277553 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:13.277662 kubelet[1578]: I0710 00:34:13.277645 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:13.277739 kubelet[1578]: I0710 00:34:13.277726 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:13.277815 kubelet[1578]: I0710 00:34:13.277802 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18fff775d5fbd83cbcb968afa4461d64-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18fff775d5fbd83cbcb968afa4461d64\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:13.277895 kubelet[1578]: I0710 00:34:13.277882 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18fff775d5fbd83cbcb968afa4461d64-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18fff775d5fbd83cbcb968afa4461d64\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:13.277982 kubelet[1578]: I0710 00:34:13.277969 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:13.278057 kubelet[1578]: I0710 00:34:13.278045 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:13.396635 kubelet[1578]: I0710 00:34:13.396101 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:34:13.396635 kubelet[1578]: E0710 00:34:13.396440 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost"
Jul 10 00:34:13.416894 kubelet[1578]: E0710 00:34:13.416843 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:13.417531 env[1214]: time="2025-07-10T00:34:13.417480861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18fff775d5fbd83cbcb968afa4461d64,Namespace:kube-system,Attempt:0,}"
Jul 10 00:34:13.420696 kubelet[1578]: E0710 00:34:13.420673 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:13.421369 env[1214]: time="2025-07-10T00:34:13.421118624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 10 00:34:13.423850 kubelet[1578]: E0710 00:34:13.423826 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:13.424203 env[1214]: time="2025-07-10T00:34:13.424165409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 10 00:34:13.579326 kubelet[1578]: E0710 00:34:13.579280 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms"
Jul 10 00:34:13.795638 kubelet[1578]: W0710 00:34:13.795272 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:13.795638 kubelet[1578]: E0710 00:34:13.795318 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:13.797622 kubelet[1578]: I0710 00:34:13.797591 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:34:13.797924 kubelet[1578]: E0710 00:34:13.797881 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost"
Jul 10 00:34:13.916689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27330598.mount: Deactivated successfully.
Jul 10 00:34:13.921712 env[1214]: time="2025-07-10T00:34:13.921307071Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.925824 env[1214]: time="2025-07-10T00:34:13.925263636Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.927171 env[1214]: time="2025-07-10T00:34:13.926712410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.929593 env[1214]: time="2025-07-10T00:34:13.929566377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.931459 env[1214]: time="2025-07-10T00:34:13.931434723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.934792 env[1214]: time="2025-07-10T00:34:13.933454667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.936849 env[1214]: time="2025-07-10T00:34:13.936823134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.938563 env[1214]: time="2025-07-10T00:34:13.938534561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.940012 env[1214]: time="2025-07-10T00:34:13.939974091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.942311 env[1214]: time="2025-07-10T00:34:13.942280340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.944018 env[1214]: time="2025-07-10T00:34:13.943988445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.953294 env[1214]: time="2025-07-10T00:34:13.953237092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:34:13.972326 env[1214]: time="2025-07-10T00:34:13.972268937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:34:13.972478 env[1214]: time="2025-07-10T00:34:13.972310398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:34:13.972478 env[1214]: time="2025-07-10T00:34:13.972320764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:34:13.972678 env[1214]: time="2025-07-10T00:34:13.972603427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2fdfd7708169b5f8d3993bfdd0ac92f8287767ec2e1678310f32e59b026b79e pid=1625 runtime=io.containerd.runc.v2
Jul 10 00:34:13.975197 kubelet[1578]: W0710 00:34:13.975113 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:13.975474 kubelet[1578]: E0710 00:34:13.975213 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:13.977294 env[1214]: time="2025-07-10T00:34:13.977042396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:34:13.977294 env[1214]: time="2025-07-10T00:34:13.977096784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:34:13.977294 env[1214]: time="2025-07-10T00:34:13.977118235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:34:13.977821 env[1214]: time="2025-07-10T00:34:13.977328181Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c449803a220e5aa84a174918294b30616e8343ef7db7fd7734184759b7b6fab0 pid=1641 runtime=io.containerd.runc.v2
Jul 10 00:34:13.980333 env[1214]: time="2025-07-10T00:34:13.980268832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:34:13.980627 env[1214]: time="2025-07-10T00:34:13.980586833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:34:13.980627 env[1214]: time="2025-07-10T00:34:13.980608124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:34:13.981403 env[1214]: time="2025-07-10T00:34:13.980852407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a218b76067e2f2b2585df8e4e970e8e1419370377fe2913e790b2fd9cf4d38b8 pid=1653 runtime=io.containerd.runc.v2
Jul 10 00:34:13.986623 systemd[1]: Started cri-containerd-e2fdfd7708169b5f8d3993bfdd0ac92f8287767ec2e1678310f32e59b026b79e.scope.
Jul 10 00:34:13.994536 systemd[1]: Started cri-containerd-a218b76067e2f2b2585df8e4e970e8e1419370377fe2913e790b2fd9cf4d38b8.scope.
Jul 10 00:34:14.013653 systemd[1]: Started cri-containerd-c449803a220e5aa84a174918294b30616e8343ef7db7fd7734184759b7b6fab0.scope.
Jul 10 00:34:14.046985 env[1214]: time="2025-07-10T00:34:14.046884580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18fff775d5fbd83cbcb968afa4461d64,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2fdfd7708169b5f8d3993bfdd0ac92f8287767ec2e1678310f32e59b026b79e\""
Jul 10 00:34:14.048823 kubelet[1578]: E0710 00:34:14.048607 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:14.050832 env[1214]: time="2025-07-10T00:34:14.050787231Z" level=info msg="CreateContainer within sandbox \"e2fdfd7708169b5f8d3993bfdd0ac92f8287767ec2e1678310f32e59b026b79e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 00:34:14.064812 env[1214]: time="2025-07-10T00:34:14.063911571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c449803a220e5aa84a174918294b30616e8343ef7db7fd7734184759b7b6fab0\""
Jul 10 00:34:14.066357 kubelet[1578]: E0710 00:34:14.066243 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:14.067289 env[1214]: time="2025-07-10T00:34:14.067192626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a218b76067e2f2b2585df8e4e970e8e1419370377fe2913e790b2fd9cf4d38b8\""
Jul 10 00:34:14.068565 kubelet[1578]: E0710 00:34:14.067902 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:14.068630 env[1214]: time="2025-07-10T00:34:14.068469232Z" level=info msg="CreateContainer within sandbox \"c449803a220e5aa84a174918294b30616e8343ef7db7fd7734184759b7b6fab0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 00:34:14.069264 env[1214]: time="2025-07-10T00:34:14.069196074Z" level=info msg="CreateContainer within sandbox \"a218b76067e2f2b2585df8e4e970e8e1419370377fe2913e790b2fd9cf4d38b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 00:34:14.071401 env[1214]: time="2025-07-10T00:34:14.071362875Z" level=info msg="CreateContainer within sandbox \"e2fdfd7708169b5f8d3993bfdd0ac92f8287767ec2e1678310f32e59b026b79e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9fccc8b8fdcd68308cb845c92953025307b4f4646a5fcc2184b0bd78d3257a17\""
Jul 10 00:34:14.081503 env[1214]: time="2025-07-10T00:34:14.081469357Z" level=info msg="StartContainer for \"9fccc8b8fdcd68308cb845c92953025307b4f4646a5fcc2184b0bd78d3257a17\""
Jul 10 00:34:14.096901 env[1214]: time="2025-07-10T00:34:14.096839053Z" level=info msg="CreateContainer within sandbox \"c449803a220e5aa84a174918294b30616e8343ef7db7fd7734184759b7b6fab0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c3b374af04e5634349f9ef342e2322e20844d8d9bfcada78b97f82f3b5473ee\""
Jul 10 00:34:14.097388 env[1214]: time="2025-07-10T00:34:14.097345678Z" level=info msg="StartContainer for \"2c3b374af04e5634349f9ef342e2322e20844d8d9bfcada78b97f82f3b5473ee\""
Jul 10 00:34:14.100464 systemd[1]: Started cri-containerd-9fccc8b8fdcd68308cb845c92953025307b4f4646a5fcc2184b0bd78d3257a17.scope.
Jul 10 00:34:14.108519 env[1214]: time="2025-07-10T00:34:14.107957984Z" level=info msg="CreateContainer within sandbox \"a218b76067e2f2b2585df8e4e970e8e1419370377fe2913e790b2fd9cf4d38b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a971e47289d7423bd6fbfbdd82d9d9df3e2be123419b7a85556d78f6385c7950\""
Jul 10 00:34:14.108519 env[1214]: time="2025-07-10T00:34:14.108394097Z" level=info msg="StartContainer for \"a971e47289d7423bd6fbfbdd82d9d9df3e2be123419b7a85556d78f6385c7950\""
Jul 10 00:34:14.129014 systemd[1]: Started cri-containerd-2c3b374af04e5634349f9ef342e2322e20844d8d9bfcada78b97f82f3b5473ee.scope.
Jul 10 00:34:14.133708 systemd[1]: Started cri-containerd-a971e47289d7423bd6fbfbdd82d9d9df3e2be123419b7a85556d78f6385c7950.scope.
Jul 10 00:34:14.139835 kubelet[1578]: W0710 00:34:14.139758 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
Jul 10 00:34:14.139941 kubelet[1578]: E0710 00:34:14.139836 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:34:14.197645 env[1214]: time="2025-07-10T00:34:14.195543145Z" level=info msg="StartContainer for \"9fccc8b8fdcd68308cb845c92953025307b4f4646a5fcc2184b0bd78d3257a17\" returns successfully"
Jul 10 00:34:14.197645 env[1214]: time="2025-07-10T00:34:14.196571361Z" level=info msg="StartContainer for \"a971e47289d7423bd6fbfbdd82d9d9df3e2be123419b7a85556d78f6385c7950\" returns successfully"
Jul 10 00:34:14.205950 env[1214]: time="2025-07-10T00:34:14.205744949Z" level=info msg="StartContainer for \"2c3b374af04e5634349f9ef342e2322e20844d8d9bfcada78b97f82f3b5473ee\" returns successfully"
Jul 10 00:34:14.599201 kubelet[1578]: I0710 00:34:14.599137 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:34:15.005394 kubelet[1578]: E0710 00:34:15.005300 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:15.005696 kubelet[1578]: E0710 00:34:15.005442 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:15.006002 kubelet[1578]: E0710 00:34:15.005978 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:15.006095 kubelet[1578]: E0710 00:34:15.006079 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:15.007550 kubelet[1578]: E0710 00:34:15.007530 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:15.007646 kubelet[1578]: E0710 00:34:15.007629 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:15.903656 kubelet[1578]: E0710 00:34:15.903620 1578 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 10 00:34:16.000363 kubelet[1578]: I0710 00:34:16.000319 1578 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 10 00:34:16.000363 kubelet[1578]: E0710 00:34:16.000356 1578 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 10 00:34:16.009383 kubelet[1578]: E0710 00:34:16.009352 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:16.009649 kubelet[1578]: E0710 00:34:16.009464 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:16.009695 kubelet[1578]: E0710 00:34:16.009671 1578 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:34:16.009775 kubelet[1578]: E0710 00:34:16.009757 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:16.010074 kubelet[1578]: E0710 00:34:16.010051 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:16.111184 kubelet[1578]: E0710 00:34:16.111145 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:16.211864 kubelet[1578]: E0710 00:34:16.211742 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:16.312378 kubelet[1578]: E0710 00:34:16.312325 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:16.413365 kubelet[1578]: E0710 00:34:16.413327 1578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:16.478542 kubelet[1578]: I0710 00:34:16.478426 1578 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:16.486447 kubelet[1578]: E0710 00:34:16.486417 1578 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:16.486447 kubelet[1578]: I0710 00:34:16.486443 1578 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:16.487852 kubelet[1578]: E0710 00:34:16.487823 1578 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:16.487852 kubelet[1578]: I0710 00:34:16.487846 1578 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:16.489112 kubelet[1578]: E0710 00:34:16.489084 1578 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:16.923860 kubelet[1578]: I0710 00:34:16.923819 1578 apiserver.go:52] "Watching apiserver"
Jul 10 00:34:16.976380 kubelet[1578]: I0710 00:34:16.976336 1578 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:34:17.968729 systemd[1]: Reloading.
Jul 10 00:34:18.024401 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2025-07-10T00:34:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 10 00:34:18.024433 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2025-07-10T00:34:18Z" level=info msg="torcx already run"
Jul 10 00:34:18.084281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 10 00:34:18.084300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 10 00:34:18.099944 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:34:18.180876 kubelet[1578]: I0710 00:34:18.180834 1578 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:34:18.181062 systemd[1]: Stopping kubelet.service...
Jul 10 00:34:18.206610 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:34:18.206801 systemd[1]: Stopped kubelet.service.
Jul 10 00:34:18.206854 systemd[1]: kubelet.service: Consumed 2.330s CPU time.
Jul 10 00:34:18.208560 systemd[1]: Starting kubelet.service...
Jul 10 00:34:18.308881 systemd[1]: Started kubelet.service.
Jul 10 00:34:18.345117 kubelet[1912]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:34:18.345117 kubelet[1912]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:34:18.345117 kubelet[1912]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:34:18.345488 kubelet[1912]: I0710 00:34:18.345157 1912 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:34:18.352937 kubelet[1912]: I0710 00:34:18.352690 1912 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 00:34:18.352937 kubelet[1912]: I0710 00:34:18.352719 1912 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:34:18.353122 kubelet[1912]: I0710 00:34:18.353032 1912 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 00:34:18.354981 kubelet[1912]: I0710 00:34:18.354824 1912 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 10 00:34:18.361008 kubelet[1912]: I0710 00:34:18.360978 1912 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:34:18.366486 kubelet[1912]: E0710 00:34:18.366436 1912 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:34:18.366486 kubelet[1912]: I0710 00:34:18.366469 1912 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:34:18.369600 kubelet[1912]: I0710 00:34:18.369540 1912 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:34:18.369853 kubelet[1912]: I0710 00:34:18.369810 1912 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:34:18.370043 kubelet[1912]: I0710 00:34:18.369839 1912 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:34:18.370128 kubelet[1912]: I0710 00:34:18.370045 1912 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:34:18.370128 kubelet[1912]: I0710 00:34:18.370056 1912 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 00:34:18.370128 kubelet[1912]: I0710 00:34:18.370108 1912 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:34:18.370276 kubelet[1912]: I0710 00:34:18.370260 1912 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 00:34:18.370320 kubelet[1912]: I0710 00:34:18.370279 1912 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:34:18.370320 kubelet[1912]: I0710 00:34:18.370296 1912 kubelet.go:352] "Adding apiserver pod source"
Jul 10 00:34:18.370320 kubelet[1912]: I0710 00:34:18.370318 1912 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:34:18.384539 kubelet[1912]: I0710 00:34:18.384509 1912 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 10 00:34:18.385240 kubelet[1912]: I0710 00:34:18.385208 1912 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:34:18.386284 kubelet[1912]: I0710 00:34:18.386259 1912 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:34:18.386359 kubelet[1912]: I0710 00:34:18.386297 1912 server.go:1287] "Started kubelet"
Jul 10 00:34:18.386421 kubelet[1912]: I0710 00:34:18.386373 1912 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:34:18.386868 kubelet[1912]: I0710 00:34:18.386841 1912 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:34:18.388298 kubelet[1912]: I0710 00:34:18.387208 1912 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:34:18.388629 kubelet[1912]: I0710 00:34:18.388604 1912 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:34:18.390190 kubelet[1912]: I0710 00:34:18.390164 1912 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:34:18.390501 kubelet[1912]: I0710 00:34:18.390471 1912 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 00:34:18.391815 kubelet[1912]: E0710 00:34:18.391788 1912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:34:18.391941 kubelet[1912]: I0710 00:34:18.391929 1912 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:34:18.397599 kubelet[1912]: I0710 00:34:18.392947 1912 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:34:18.397599 kubelet[1912]: I0710 00:34:18.393321 1912 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:34:18.398456 kubelet[1912]: I0710 00:34:18.398403 1912 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:34:18.398879 kubelet[1912]: I0710 00:34:18.398497 1912 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:34:18.400537 kubelet[1912]: I0710 00:34:18.400497 1912 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:34:18.401781 kubelet[1912]: E0710 00:34:18.401755 1912 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:34:18.415006 kubelet[1912]: I0710 00:34:18.414958 1912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:34:18.416152 kubelet[1912]: I0710 00:34:18.416115 1912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:34:18.416152 kubelet[1912]: I0710 00:34:18.416144 1912 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 00:34:18.416275 kubelet[1912]: I0710 00:34:18.416166 1912 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:34:18.416275 kubelet[1912]: I0710 00:34:18.416174 1912 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 10 00:34:18.416275 kubelet[1912]: E0710 00:34:18.416243 1912 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:34:18.435161 kubelet[1912]: I0710 00:34:18.435131 1912 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:34:18.435349 kubelet[1912]: I0710 00:34:18.435332 1912 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:34:18.435424 kubelet[1912]: I0710 00:34:18.435413 1912 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:34:18.435620 kubelet[1912]: I0710 00:34:18.435604 1912 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 00:34:18.435709 kubelet[1912]: I0710 00:34:18.435682 1912 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 00:34:18.435763 kubelet[1912]: I0710 00:34:18.435754 1912 policy_none.go:49] "None policy: Start"
Jul 10 00:34:18.435820 kubelet[1912]: I0710 00:34:18.435810 1912 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:34:18.435880 kubelet[1912]: I0710 00:34:18.435871 1912 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:34:18.436039 kubelet[1912]: I0710 00:34:18.436025 1912 state_mem.go:75] "Updated machine memory state"
Jul 10 00:34:18.439643 kubelet[1912]: I0710 00:34:18.439604 1912 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 00:34:18.439771 kubelet[1912]: I0710 00:34:18.439756 1912 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:34:18.439815 kubelet[1912]: I0710 00:34:18.439770 1912 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:34:18.440523 kubelet[1912]: I0710 00:34:18.440066 1912 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:34:18.441126 kubelet[1912]: E0710 00:34:18.441061 1912 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:34:18.517061 kubelet[1912]: I0710 00:34:18.517023 1912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:18.517281 kubelet[1912]: I0710 00:34:18.517075 1912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:18.517281 kubelet[1912]: I0710 00:34:18.517023 1912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:18.543600 kubelet[1912]: I0710 00:34:18.543564 1912 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:34:18.553103 kubelet[1912]: I0710 00:34:18.553074 1912 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 10 00:34:18.553218 kubelet[1912]: I0710 00:34:18.553167 1912 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 10 00:34:18.593808 kubelet[1912]: I0710 00:34:18.593752 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18fff775d5fbd83cbcb968afa4461d64-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18fff775d5fbd83cbcb968afa4461d64\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:18.593808 kubelet[1912]: I0710 00:34:18.593796 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:18.593992 kubelet[1912]: I0710 00:34:18.593819 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:18.593992 kubelet[1912]: I0710 00:34:18.593838 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:18.593992 kubelet[1912]: I0710 00:34:18.593856 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18fff775d5fbd83cbcb968afa4461d64-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18fff775d5fbd83cbcb968afa4461d64\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:18.593992 kubelet[1912]: I0710 00:34:18.593871 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18fff775d5fbd83cbcb968afa4461d64-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18fff775d5fbd83cbcb968afa4461d64\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:18.593992 kubelet[1912]: I0710 00:34:18.593887 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:18.594119 kubelet[1912]: I0710 00:34:18.593902 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:18.594119 kubelet[1912]: I0710 00:34:18.593918 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:18.823172 kubelet[1912]: E0710 00:34:18.823133 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:18.823446 kubelet[1912]: E0710 00:34:18.823164 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:18.823500 kubelet[1912]: E0710 00:34:18.823309 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:18.965754 sudo[1949]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 10 00:34:18.966522 sudo[1949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 10 00:34:19.371588 kubelet[1912]: I0710 00:34:19.371542 1912 apiserver.go:52] "Watching apiserver"
Jul 10 00:34:19.393752 kubelet[1912]: I0710 00:34:19.393701 1912 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:34:19.405970 sudo[1949]: pam_unix(sudo:session): session closed for user root
Jul 10 00:34:19.426369 kubelet[1912]: I0710 00:34:19.426335 1912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:19.426802 kubelet[1912]: I0710 00:34:19.426778 1912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:34:19.426883 kubelet[1912]: I0710 00:34:19.426862 1912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:19.434019 kubelet[1912]: E0710 00:34:19.433846 1912 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:34:19.435049 kubelet[1912]: E0710 00:34:19.435030 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:19.437441 kubelet[1912]: E0710 00:34:19.437418 1912 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:34:19.437537 kubelet[1912]: E0710 00:34:19.437437 1912
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:34:19.437578 kubelet[1912]: E0710 00:34:19.437541 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:19.437578 kubelet[1912]: E0710 00:34:19.437571 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:19.457098 kubelet[1912]: I0710 00:34:19.457038 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.457021706 podStartE2EDuration="1.457021706s" podCreationTimestamp="2025-07-10 00:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:34:19.456786291 +0000 UTC m=+1.141861319" watchObservedRunningTime="2025-07-10 00:34:19.457021706 +0000 UTC m=+1.142096734" Jul 10 00:34:19.457403 kubelet[1912]: I0710 00:34:19.457374 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.457351022 podStartE2EDuration="1.457351022s" podCreationTimestamp="2025-07-10 00:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:34:19.449000855 +0000 UTC m=+1.134075843" watchObservedRunningTime="2025-07-10 00:34:19.457351022 +0000 UTC m=+1.142426050" Jul 10 00:34:19.471560 kubelet[1912]: I0710 00:34:19.471492 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.471467438 podStartE2EDuration="1.471467438s" podCreationTimestamp="2025-07-10 00:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:34:19.463691764 +0000 UTC m=+1.148766792" watchObservedRunningTime="2025-07-10 00:34:19.471467438 +0000 UTC m=+1.156542466" Jul 10 00:34:20.428231 kubelet[1912]: E0710 00:34:20.428173 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:20.428579 kubelet[1912]: E0710 00:34:20.428290 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:20.428579 kubelet[1912]: E0710 00:34:20.428524 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:21.272012 sudo[1314]: pam_unix(sudo:session): session closed for user root Jul 10 00:34:21.273961 sshd[1310]: pam_unix(sshd:session): session closed for user core Jul 10 00:34:21.276911 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:34:21.277062 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:42052.service: Deactivated successfully. Jul 10 00:34:21.277759 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:34:21.277923 systemd[1]: session-5.scope: Consumed 7.706s CPU time. 
Jul 10 00:34:21.278511 systemd-logind[1201]: Removed session 5. Jul 10 00:34:21.429137 kubelet[1912]: E0710 00:34:21.429099 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:24.587204 kubelet[1912]: I0710 00:34:24.587174 1912 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:34:24.587560 env[1214]: time="2025-07-10T00:34:24.587504383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:34:24.587737 kubelet[1912]: I0710 00:34:24.587698 1912 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:34:25.462078 systemd[1]: Created slice kubepods-besteffort-pod05451f67_26ba_419b_8b23_d3c7d2538732.slice. Jul 10 00:34:25.470788 systemd[1]: Created slice kubepods-burstable-pod339dedd7_13e4_4691_9620_c8e0b2532c87.slice. Jul 10 00:34:25.541717 kubelet[1912]: I0710 00:34:25.541678 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05451f67-26ba-419b-8b23-d3c7d2538732-kube-proxy\") pod \"kube-proxy-68764\" (UID: \"05451f67-26ba-419b-8b23-d3c7d2538732\") " pod="kube-system/kube-proxy-68764" Jul 10 00:34:25.541891 kubelet[1912]: I0710 00:34:25.541875 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05451f67-26ba-419b-8b23-d3c7d2538732-lib-modules\") pod \"kube-proxy-68764\" (UID: \"05451f67-26ba-419b-8b23-d3c7d2538732\") " pod="kube-system/kube-proxy-68764" Jul 10 00:34:25.542011 kubelet[1912]: I0710 00:34:25.541996 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-etc-cni-netd\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542109 kubelet[1912]: I0710 00:34:25.542095 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-xtables-lock\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542211 kubelet[1912]: I0710 00:34:25.542196 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cni-path\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542330 kubelet[1912]: I0710 00:34:25.542314 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nks6s\" (UniqueName: \"kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-kube-api-access-nks6s\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542428 kubelet[1912]: I0710 00:34:25.542414 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-hostproc\") pod 
\"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542520 kubelet[1912]: I0710 00:34:25.542508 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-lib-modules\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542607 kubelet[1912]: I0710 00:34:25.542594 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/339dedd7-13e4-4691-9620-c8e0b2532c87-clustermesh-secrets\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542709 kubelet[1912]: I0710 00:34:25.542695 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-bpf-maps\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542799 kubelet[1912]: I0710 00:34:25.542785 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05451f67-26ba-419b-8b23-d3c7d2538732-xtables-lock\") pod \"kube-proxy-68764\" (UID: \"05451f67-26ba-419b-8b23-d3c7d2538732\") " pod="kube-system/kube-proxy-68764" Jul 10 00:34:25.542889 kubelet[1912]: I0710 00:34:25.542876 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-cgroup\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.542984 kubelet[1912]: I0710 00:34:25.542970 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8krh\" (UniqueName: \"kubernetes.io/projected/05451f67-26ba-419b-8b23-d3c7d2538732-kube-api-access-m8krh\") pod \"kube-proxy-68764\" (UID: \"05451f67-26ba-419b-8b23-d3c7d2538732\") " pod="kube-system/kube-proxy-68764" Jul 10 00:34:25.543079 kubelet[1912]: I0710 00:34:25.543055 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-kernel\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.543174 kubelet[1912]: I0710 00:34:25.543160 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-run\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.543267 kubelet[1912]: I0710 00:34:25.543253 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-config-path\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.543354 kubelet[1912]: I0710 
00:34:25.543341 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-net\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.543458 kubelet[1912]: I0710 00:34:25.543444 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-hubble-tls\") pod \"cilium-wp679\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") " pod="kube-system/cilium-wp679" Jul 10 00:34:25.644548 kubelet[1912]: I0710 00:34:25.644507 1912 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:34:25.686441 systemd[1]: Created slice kubepods-besteffort-podf5e4c308_e8f2_43ba_b7a2_377166168980.slice. Jul 10 00:34:25.746903 kubelet[1912]: I0710 00:34:25.746780 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5e4c308-e8f2-43ba-b7a2-377166168980-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8dq5s\" (UID: \"f5e4c308-e8f2-43ba-b7a2-377166168980\") " pod="kube-system/cilium-operator-6c4d7847fc-8dq5s" Jul 10 00:34:25.746903 kubelet[1912]: I0710 00:34:25.746833 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pck55\" (UniqueName: \"kubernetes.io/projected/f5e4c308-e8f2-43ba-b7a2-377166168980-kube-api-access-pck55\") pod \"cilium-operator-6c4d7847fc-8dq5s\" (UID: \"f5e4c308-e8f2-43ba-b7a2-377166168980\") " pod="kube-system/cilium-operator-6c4d7847fc-8dq5s" Jul 10 00:34:25.768012 kubelet[1912]: E0710 00:34:25.767968 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:25.768685 env[1214]: time="2025-07-10T00:34:25.768635663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68764,Uid:05451f67-26ba-419b-8b23-d3c7d2538732,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:25.776659 kubelet[1912]: E0710 00:34:25.776461 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:25.777325 env[1214]: time="2025-07-10T00:34:25.777279328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wp679,Uid:339dedd7-13e4-4691-9620-c8e0b2532c87,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:25.784935 env[1214]: time="2025-07-10T00:34:25.784877461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:25.784935 env[1214]: time="2025-07-10T00:34:25.784917907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:25.784935 env[1214]: time="2025-07-10T00:34:25.784928789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:25.785086 env[1214]: time="2025-07-10T00:34:25.785055970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7d2b3ab28c3d52706f3ab3344d028ecd559f8181e90f2c4ace0e1eac4d10f7 pid=2006 runtime=io.containerd.runc.v2 Jul 10 00:34:25.797022 systemd[1]: Started cri-containerd-0c7d2b3ab28c3d52706f3ab3344d028ecd559f8181e90f2c4ace0e1eac4d10f7.scope. Jul 10 00:34:25.799791 env[1214]: time="2025-07-10T00:34:25.799719187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:25.799791 env[1214]: time="2025-07-10T00:34:25.799768996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:25.799989 env[1214]: time="2025-07-10T00:34:25.799959387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:25.800243 env[1214]: time="2025-07-10T00:34:25.800207668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c pid=2034 runtime=io.containerd.runc.v2 Jul 10 00:34:25.812013 systemd[1]: Started cri-containerd-76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c.scope. Jul 10 00:34:25.841005 env[1214]: time="2025-07-10T00:34:25.840849208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68764,Uid:05451f67-26ba-419b-8b23-d3c7d2538732,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c7d2b3ab28c3d52706f3ab3344d028ecd559f8181e90f2c4ace0e1eac4d10f7\"" Jul 10 00:34:25.841817 kubelet[1912]: E0710 00:34:25.841573 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:25.845198 env[1214]: time="2025-07-10T00:34:25.845132994Z" level=info msg="CreateContainer within sandbox \"0c7d2b3ab28c3d52706f3ab3344d028ecd559f8181e90f2c4ace0e1eac4d10f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:34:25.863365 env[1214]: time="2025-07-10T00:34:25.863305510Z" level=info msg="CreateContainer within sandbox \"0c7d2b3ab28c3d52706f3ab3344d028ecd559f8181e90f2c4ace0e1eac4d10f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3dd136afa19d48948dc794758603399a3991f88b5b199f797f8a897f7c059f28\"" Jul 10 00:34:25.864062 env[1214]: time="2025-07-10T00:34:25.863984102Z" level=info msg="StartContainer for \"3dd136afa19d48948dc794758603399a3991f88b5b199f797f8a897f7c059f28\"" Jul 10 00:34:25.869822 env[1214]: time="2025-07-10T00:34:25.869781338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wp679,Uid:339dedd7-13e4-4691-9620-c8e0b2532c87,Namespace:kube-system,Attempt:0,} returns sandbox id \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\"" Jul 10 00:34:25.870620 kubelet[1912]: E0710 00:34:25.870595 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:25.871861 env[1214]: time="2025-07-10T00:34:25.871826395Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:34:25.884160 systemd[1]: Started cri-containerd-3dd136afa19d48948dc794758603399a3991f88b5b199f797f8a897f7c059f28.scope. Jul 10 00:34:25.930587 env[1214]: time="2025-07-10T00:34:25.930527112Z" level=info msg="StartContainer for \"3dd136afa19d48948dc794758603399a3991f88b5b199f797f8a897f7c059f28\" returns successfully" Jul 10 00:34:25.991654 kubelet[1912]: E0710 00:34:25.991614 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:25.992403 env[1214]: time="2025-07-10T00:34:25.992365107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8dq5s,Uid:f5e4c308-e8f2-43ba-b7a2-377166168980,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:26.006145 env[1214]: time="2025-07-10T00:34:26.005861843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:26.006145 env[1214]: time="2025-07-10T00:34:26.005908730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:26.006145 env[1214]: time="2025-07-10T00:34:26.005919132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:26.006351 env[1214]: time="2025-07-10T00:34:26.006082638Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab pid=2128 runtime=io.containerd.runc.v2 Jul 10 00:34:26.016951 systemd[1]: Started cri-containerd-cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab.scope. 
Jul 10 00:34:26.066415 env[1214]: time="2025-07-10T00:34:26.066373372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8dq5s,Uid:f5e4c308-e8f2-43ba-b7a2-377166168980,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab\"" Jul 10 00:34:26.067140 kubelet[1912]: E0710 00:34:26.067100 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:26.444537 kubelet[1912]: E0710 00:34:26.444506 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:26.454021 kubelet[1912]: I0710 00:34:26.453652 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-68764" podStartSLOduration=1.4536346070000001 podStartE2EDuration="1.453634607s" podCreationTimestamp="2025-07-10 00:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:34:26.453520989 +0000 UTC m=+8.138595977" watchObservedRunningTime="2025-07-10 00:34:26.453634607 +0000 UTC m=+8.138709635" Jul 10 00:34:27.395635 kubelet[1912]: E0710 00:34:27.395585 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:27.446548 kubelet[1912]: E0710 00:34:27.446456 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:28.899543 kubelet[1912]: E0710 00:34:28.899500 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:29.977761 kubelet[1912]: E0710 00:34:29.977725 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:32.025068 update_engine[1203]: I0710 00:34:32.024994 1203 update_attempter.cc:509] Updating boot flags... Jul 10 00:34:32.849574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049900957.mount: Deactivated successfully. 
Jul 10 00:34:35.158095 env[1214]: time="2025-07-10T00:34:35.158051818Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:35.159682 env[1214]: time="2025-07-10T00:34:35.159642895Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:35.161234 env[1214]: time="2025-07-10T00:34:35.161189328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:35.162402 env[1214]: time="2025-07-10T00:34:35.162360003Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 00:34:35.169616 env[1214]: time="2025-07-10T00:34:35.167162837Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:34:35.170314 env[1214]: time="2025-07-10T00:34:35.170170734Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:34:35.179813 env[1214]: time="2025-07-10T00:34:35.179751719Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\"" Jul 10 00:34:35.181660 env[1214]: time="2025-07-10T00:34:35.181626664Z" level=info msg="StartContainer for \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\"" Jul 10 00:34:35.205432 systemd[1]: Started cri-containerd-fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af.scope. Jul 10 00:34:35.255672 env[1214]: time="2025-07-10T00:34:35.255617244Z" level=info msg="StartContainer for \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\" returns successfully" Jul 10 00:34:35.272488 systemd[1]: cri-containerd-fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af.scope: Deactivated successfully. 
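The agent image above was pulled by digest: the PullImage reference quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a… resolves to the local image ID sha256:b69cb5eb… reported in the ImageCreate events. A rough split of such a repo:tag@digest reference (illustrative string handling only, not containerd's actual reference parser, which also handles registry ports and digest-only references):

    def split_reference(ref):
        repo_tag, _, digest = ref.partition("@")
        repo, _, tag = repo_tag.rpartition(":")
        return {"repository": repo, "tag": tag, "digest": digest}

    ref = ("quay.io/cilium/cilium:v1.12.5@sha256:"
           "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    print(split_reference(ref))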
Jul 10 00:34:35.319682 env[1214]: time="2025-07-10T00:34:35.319634240Z" level=info msg="shim disconnected" id=fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af Jul 10 00:34:35.320021 env[1214]: time="2025-07-10T00:34:35.320001196Z" level=warning msg="cleaning up after shim disconnected" id=fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af namespace=k8s.io Jul 10 00:34:35.320088 env[1214]: time="2025-07-10T00:34:35.320074843Z" level=info msg="cleaning up dead shim" Jul 10 00:34:35.327458 env[1214]: time="2025-07-10T00:34:35.327418488Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2358 runtime=io.containerd.runc.v2\n" Jul 10 00:34:35.465903 kubelet[1912]: E0710 00:34:35.465791 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:35.471287 env[1214]: time="2025-07-10T00:34:35.471245597Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:34:35.485593 env[1214]: time="2025-07-10T00:34:35.485535887Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\"" Jul 10 00:34:35.486377 env[1214]: time="2025-07-10T00:34:35.486287041Z" level=info msg="StartContainer for \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\"" Jul 10 00:34:35.507986 systemd[1]: Started cri-containerd-57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f.scope. Jul 10 00:34:35.544156 env[1214]: time="2025-07-10T00:34:35.544108026Z" level=info msg="StartContainer for \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\" returns successfully" Jul 10 00:34:35.558295 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:34:35.558553 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:34:35.558911 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:34:35.560715 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:35.561750 systemd[1]: cri-containerd-57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f.scope: Deactivated successfully. Jul 10 00:34:35.569503 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:35.603831 env[1214]: time="2025-07-10T00:34:35.603779793Z" level=info msg="shim disconnected" id=57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f Jul 10 00:34:35.604067 env[1214]: time="2025-07-10T00:34:35.604047979Z" level=warning msg="cleaning up after shim disconnected" id=57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f namespace=k8s.io Jul 10 00:34:35.604139 env[1214]: time="2025-07-10T00:34:35.604125387Z" level=info msg="cleaning up dead shim" Jul 10 00:34:35.610963 env[1214]: time="2025-07-10T00:34:35.610919697Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2425 runtime=io.containerd.runc.v2\n" Jul 10 00:34:36.178196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af-rootfs.mount: Deactivated successfully. 
Jul 10 00:34:36.465465 kubelet[1912]: E0710 00:34:36.465283 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:36.471261 env[1214]: time="2025-07-10T00:34:36.468541096Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:34:36.496563 env[1214]: time="2025-07-10T00:34:36.496505448Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\"" Jul 10 00:34:36.498425 env[1214]: time="2025-07-10T00:34:36.497312604Z" level=info msg="StartContainer for \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\"" Jul 10 00:34:36.520562 systemd[1]: Started cri-containerd-4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c.scope. Jul 10 00:34:36.591148 env[1214]: time="2025-07-10T00:34:36.591106509Z" level=info msg="StartContainer for \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\" returns successfully" Jul 10 00:34:36.601503 systemd[1]: cri-containerd-4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c.scope: Deactivated successfully. Jul 10 00:34:36.627302 env[1214]: time="2025-07-10T00:34:36.627249670Z" level=info msg="shim disconnected" id=4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c Jul 10 00:34:36.627302 env[1214]: time="2025-07-10T00:34:36.627298715Z" level=warning msg="cleaning up after shim disconnected" id=4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c namespace=k8s.io Jul 10 00:34:36.627302 env[1214]: time="2025-07-10T00:34:36.627310316Z" level=info msg="cleaning up dead shim" Jul 10 00:34:36.633960 env[1214]: time="2025-07-10T00:34:36.633910737Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2481 runtime=io.containerd.runc.v2\n" Jul 10 00:34:37.013511 env[1214]: time="2025-07-10T00:34:37.013437476Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:37.014684 env[1214]: time="2025-07-10T00:34:37.014650785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:37.016629 env[1214]: time="2025-07-10T00:34:37.016592999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:34:37.017170 env[1214]: time="2025-07-10T00:34:37.017134968Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 00:34:37.019495 env[1214]: time="2025-07-10T00:34:37.019454296Z" level=info msg="CreateContainer within sandbox 
\"cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:34:37.028706 env[1214]: time="2025-07-10T00:34:37.028671924Z" level=info msg="CreateContainer within sandbox \"cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\"" Jul 10 00:34:37.029176 env[1214]: time="2025-07-10T00:34:37.029100683Z" level=info msg="StartContainer for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\"" Jul 10 00:34:37.043188 systemd[1]: Started cri-containerd-7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3.scope. Jul 10 00:34:37.094246 env[1214]: time="2025-07-10T00:34:37.091258506Z" level=info msg="StartContainer for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" returns successfully" Jul 10 00:34:37.178468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c-rootfs.mount: Deactivated successfully. Jul 10 00:34:37.469428 kubelet[1912]: E0710 00:34:37.469400 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:37.470881 kubelet[1912]: E0710 00:34:37.470861 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:37.471957 env[1214]: time="2025-07-10T00:34:37.471893653Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:34:37.487591 env[1214]: time="2025-07-10T00:34:37.487541738Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\"" Jul 10 00:34:37.488342 env[1214]: time="2025-07-10T00:34:37.488288325Z" level=info msg="StartContainer for \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\"" Jul 10 00:34:37.507131 systemd[1]: Started cri-containerd-9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8.scope. Jul 10 00:34:37.588515 systemd[1]: cri-containerd-9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8.scope: Deactivated successfully. 
Jul 10 00:34:37.614323 env[1214]: time="2025-07-10T00:34:37.614279882Z" level=info msg="StartContainer for \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\" returns successfully" Jul 10 00:34:37.654716 env[1214]: time="2025-07-10T00:34:37.654668229Z" level=info msg="shim disconnected" id=9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8 Jul 10 00:34:37.654944 env[1214]: time="2025-07-10T00:34:37.654925252Z" level=warning msg="cleaning up after shim disconnected" id=9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8 namespace=k8s.io Jul 10 00:34:37.655032 env[1214]: time="2025-07-10T00:34:37.655017740Z" level=info msg="cleaning up dead shim" Jul 10 00:34:37.665735 env[1214]: time="2025-07-10T00:34:37.665683418Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:34:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2574 runtime=io.containerd.runc.v2\ntime=\"2025-07-10T00:34:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Jul 10 00:34:38.177871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8-rootfs.mount: Deactivated successfully. Jul 10 00:34:38.478280 kubelet[1912]: E0710 00:34:38.478164 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:38.478280 kubelet[1912]: E0710 00:34:38.478232 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:38.482334 env[1214]: time="2025-07-10T00:34:38.482295874Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:34:38.497063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306808200.mount: Deactivated successfully. Jul 10 00:34:38.497948 kubelet[1912]: I0710 00:34:38.497887 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8dq5s" podStartSLOduration=2.548338436 podStartE2EDuration="13.497871891s" podCreationTimestamp="2025-07-10 00:34:25 +0000 UTC" firstStartedPulling="2025-07-10 00:34:26.068653048 +0000 UTC m=+7.753728036" lastFinishedPulling="2025-07-10 00:34:37.018186463 +0000 UTC m=+18.703261491" observedRunningTime="2025-07-10 00:34:37.51238289 +0000 UTC m=+19.197457918" watchObservedRunningTime="2025-07-10 00:34:38.497871891 +0000 UTC m=+20.182946879" Jul 10 00:34:38.501569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028371578.mount: Deactivated successfully. Jul 10 00:34:38.504992 env[1214]: time="2025-07-10T00:34:38.504942177Z" level=info msg="CreateContainer within sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\"" Jul 10 00:34:38.505611 env[1214]: time="2025-07-10T00:34:38.505573031Z" level=info msg="StartContainer for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\"" Jul 10 00:34:38.520256 systemd[1]: Started cri-containerd-c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb.scope. 
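Taken together, 00:34:35 through 00:34:38 show the cilium-wp679 pod's containers running strictly in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each start, exit, have their scope deactivated and their dead runc shim cleaned up before the next CreateContainer, and only then does the long-running cilium-agent start. A small sketch that recovers this ordering from journal text shaped like these lines (the regex is tailored to this exact message format):

    import re

    # Matches the CreateContainer messages in this journal excerpt; each
    # container name shows up twice (request and "returns container id"),
    # so first occurrences give the execution order.
    PATTERN = re.compile(
        r'CreateContainer within sandbox .*?&ContainerMetadata\{Name:([\w-]+),',
        re.DOTALL,
    )

    def container_sequence(journal_text):
        order = []
        for name in PATTERN.findall(journal_text):
            if name not in order:
                order.append(name)
        return order

    # On this log, the order comes out as: kube-proxy, mount-cgroup,
    # apply-sysctl-overwrites, mount-bpf-fs, cilium-operator,
    # clean-cilium-state, cilium-agent, coredns.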
Jul 10 00:34:38.568546 env[1214]: time="2025-07-10T00:34:38.568502071Z" level=info msg="StartContainer for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" returns successfully" Jul 10 00:34:38.734303 kubelet[1912]: I0710 00:34:38.733381 1912 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:34:38.775104 systemd[1]: Created slice kubepods-burstable-pod36c2eeac_5539_436d_9254_5994e488abbf.slice. Jul 10 00:34:38.778968 systemd[1]: Created slice kubepods-burstable-pod61ac0ab0_789e_43c5_be48_06574534e5be.slice. Jul 10 00:34:38.838256 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 10 00:34:38.851726 kubelet[1912]: I0710 00:34:38.851690 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36c2eeac-5539-436d-9254-5994e488abbf-config-volume\") pod \"coredns-668d6bf9bc-5n8g6\" (UID: \"36c2eeac-5539-436d-9254-5994e488abbf\") " pod="kube-system/coredns-668d6bf9bc-5n8g6" Jul 10 00:34:38.851726 kubelet[1912]: I0710 00:34:38.851728 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnnv\" (UniqueName: \"kubernetes.io/projected/36c2eeac-5539-436d-9254-5994e488abbf-kube-api-access-bbnnv\") pod \"coredns-668d6bf9bc-5n8g6\" (UID: \"36c2eeac-5539-436d-9254-5994e488abbf\") " pod="kube-system/coredns-668d6bf9bc-5n8g6" Jul 10 00:34:38.851868 kubelet[1912]: I0710 00:34:38.851751 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61ac0ab0-789e-43c5-be48-06574534e5be-config-volume\") pod \"coredns-668d6bf9bc-952kh\" (UID: \"61ac0ab0-789e-43c5-be48-06574534e5be\") " pod="kube-system/coredns-668d6bf9bc-952kh" Jul 10 00:34:38.851868 kubelet[1912]: I0710 00:34:38.851773 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vks5x\" (UniqueName: \"kubernetes.io/projected/61ac0ab0-789e-43c5-be48-06574534e5be-kube-api-access-vks5x\") pod \"coredns-668d6bf9bc-952kh\" (UID: \"61ac0ab0-789e-43c5-be48-06574534e5be\") " pod="kube-system/coredns-668d6bf9bc-952kh" Jul 10 00:34:39.077618 kubelet[1912]: E0710 00:34:39.077580 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:39.078415 env[1214]: time="2025-07-10T00:34:39.078372491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5n8g6,Uid:36c2eeac-5539-436d-9254-5994e488abbf,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:39.081129 kubelet[1912]: E0710 00:34:39.081093 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:39.081613 env[1214]: time="2025-07-10T00:34:39.081569793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-952kh,Uid:61ac0ab0-789e-43c5-be48-06574534e5be,Namespace:kube-system,Attempt:0,}" Jul 10 00:34:39.086237 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
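The kernel's "Unprivileged eBPF is enabled" warning fires while Cilium loads its BPF programs because kernel.unprivileged_bpf_disabled is 0 on this node. A quick check of that sysctl via procfs (0 allows unprivileged BPF and triggers the warning; 1 disables it; 2 disables it until reboot):

    from pathlib import Path

    # Read the sysctl behind the Spectre v2 BHB warning above.
    knob = Path("/proc/sys/kernel/unprivileged_bpf_disabled")
    print("kernel.unprivileged_bpf_disabled =", knob.read_text().strip())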
Jul 10 00:34:39.480423 kubelet[1912]: E0710 00:34:39.480322 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:39.494858 kubelet[1912]: I0710 00:34:39.494793 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wp679" podStartSLOduration=5.199628523 podStartE2EDuration="14.494778095s" podCreationTimestamp="2025-07-10 00:34:25 +0000 UTC" firstStartedPulling="2025-07-10 00:34:25.871388883 +0000 UTC m=+7.556463911" lastFinishedPulling="2025-07-10 00:34:35.166538495 +0000 UTC m=+16.851613483" observedRunningTime="2025-07-10 00:34:39.494627562 +0000 UTC m=+21.179702590" watchObservedRunningTime="2025-07-10 00:34:39.494778095 +0000 UTC m=+21.179853123" Jul 10 00:34:40.482408 kubelet[1912]: E0710 00:34:40.482380 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:40.729163 systemd-networkd[1041]: cilium_host: Link UP Jul 10 00:34:40.729441 systemd-networkd[1041]: cilium_net: Link UP Jul 10 00:34:40.729444 systemd-networkd[1041]: cilium_net: Gained carrier Jul 10 00:34:40.729567 systemd-networkd[1041]: cilium_host: Gained carrier Jul 10 00:34:40.730477 systemd-networkd[1041]: cilium_host: Gained IPv6LL Jul 10 00:34:40.730647 systemd-networkd[1041]: cilium_net: Gained IPv6LL Jul 10 00:34:40.731397 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 10 00:34:40.822419 systemd-networkd[1041]: cilium_vxlan: Link UP Jul 10 00:34:40.822425 systemd-networkd[1041]: cilium_vxlan: Gained carrier Jul 10 00:34:41.209264 kernel: NET: Registered PF_ALG protocol family Jul 10 00:34:41.483711 kubelet[1912]: E0710 00:34:41.483676 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:41.817645 systemd-networkd[1041]: lxc_health: Link UP Jul 10 00:34:41.826000 systemd-networkd[1041]: lxc_health: Gained carrier Jul 10 00:34:41.826246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:34:42.163616 systemd-networkd[1041]: lxc39ad81797637: Link UP Jul 10 00:34:42.170261 kernel: eth0: renamed from tmp56fc6 Jul 10 00:34:42.176978 systemd-networkd[1041]: lxc39ad81797637: Gained carrier Jul 10 00:34:42.177160 systemd-networkd[1041]: lxcb67368490326: Link UP Jul 10 00:34:42.177240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc39ad81797637: link becomes ready Jul 10 00:34:42.187244 kernel: eth0: renamed from tmpf8dd5 Jul 10 00:34:42.194600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb67368490326: link becomes ready Jul 10 00:34:42.194542 systemd-networkd[1041]: lxcb67368490326: Gained carrier Jul 10 00:34:42.331407 systemd-networkd[1041]: cilium_vxlan: Gained IPv6LL Jul 10 00:34:43.227451 systemd-networkd[1041]: lxc39ad81797637: Gained IPv6LL Jul 10 00:34:43.419351 systemd-networkd[1041]: lxc_health: Gained IPv6LL Jul 10 00:34:43.777776 kubelet[1912]: E0710 00:34:43.777732 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:43.803402 systemd-networkd[1041]: lxcb67368490326: Gained IPv6LL Jul 10 00:34:44.627324 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:50604.service. 
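The "eth0: renamed from tmp56fc6" / "tmpf8dd5" kernel lines at 00:34:42 tie each new lxc* veth back to its pod sandbox: the temporary name embeds the leading hex of the sandbox IDs that appear in the shim lines at 00:34:45 (56fc640e88… and f8dd53a0d0…). This looks like a CNI-side naming convention rather than a documented interface; a sketch of matching the two by prefix:

    sandboxes = [
        "56fc640e88b677630b7a28a08953c1cf98755c197c0b0506397ab3ca4fb1afa3",
        "f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785",
    ]
    for tmp_name in ("tmp56fc6", "tmpf8dd5"):
        prefix = tmp_name[len("tmp"):]
        owner = next((s for s in sandboxes if s.startswith(prefix)), None)
        print(tmp_name, "->", owner)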
Jul 10 00:34:44.677416 sshd[3140]: Accepted publickey for core from 10.0.0.1 port 50604 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:34:44.679044 sshd[3140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:34:44.683039 systemd-logind[1201]: New session 6 of user core. Jul 10 00:34:44.683895 systemd[1]: Started session-6.scope. Jul 10 00:34:44.814613 sshd[3140]: pam_unix(sshd:session): session closed for user core Jul 10 00:34:44.817510 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:50604.service: Deactivated successfully. Jul 10 00:34:44.818264 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:34:44.818900 systemd-logind[1201]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:34:44.819723 systemd-logind[1201]: Removed session 6. Jul 10 00:34:45.829139 env[1214]: time="2025-07-10T00:34:45.829050404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:45.836578 env[1214]: time="2025-07-10T00:34:45.829090967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:45.836578 env[1214]: time="2025-07-10T00:34:45.829108128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:45.836578 env[1214]: time="2025-07-10T00:34:45.829333822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785 pid=3177 runtime=io.containerd.runc.v2 Jul 10 00:34:45.836578 env[1214]: time="2025-07-10T00:34:45.830353208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:34:45.836578 env[1214]: time="2025-07-10T00:34:45.830384049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:34:45.836578 env[1214]: time="2025-07-10T00:34:45.830394370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:34:45.837946 env[1214]: time="2025-07-10T00:34:45.837809284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56fc640e88b677630b7a28a08953c1cf98755c197c0b0506397ab3ca4fb1afa3 pid=3176 runtime=io.containerd.runc.v2 Jul 10 00:34:45.851863 systemd[1]: run-containerd-runc-k8s.io-f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785-runc.Afk4Cu.mount: Deactivated successfully. Jul 10 00:34:45.858773 systemd[1]: Started cri-containerd-56fc640e88b677630b7a28a08953c1cf98755c197c0b0506397ab3ca4fb1afa3.scope. Jul 10 00:34:45.860614 systemd[1]: Started cri-containerd-f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785.scope. 
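Each "starting signal loop" line records where the runc v2 shim keeps the task's state: the containerd state root, the namespace (k8s.io), then the sandbox or container ID. A sketch of that path construction (/run/containerd is containerd's default state directory, assumed here rather than read from config.toml):

    from pathlib import Path

    # containerd's default runtime v2 state root; configurable, assumed here.
    STATE_ROOT = Path("/run/containerd/io.containerd.runtime.v2.task")

    def shim_task_dir(namespace, task_id):
        return STATE_ROOT / namespace / task_id

    print(shim_task_dir(
        "k8s.io",
        "f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785",
    ))  # matches the path= field in the shim line above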
Jul 10 00:34:45.908089 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:34:45.914124 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:34:45.931401 env[1214]: time="2025-07-10T00:34:45.930504291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-952kh,Uid:61ac0ab0-789e-43c5-be48-06574534e5be,Namespace:kube-system,Attempt:0,} returns sandbox id \"56fc640e88b677630b7a28a08953c1cf98755c197c0b0506397ab3ca4fb1afa3\"" Jul 10 00:34:45.932201 kubelet[1912]: E0710 00:34:45.932175 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:45.933635 env[1214]: time="2025-07-10T00:34:45.933117138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5n8g6,Uid:36c2eeac-5539-436d-9254-5994e488abbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785\"" Jul 10 00:34:45.933729 kubelet[1912]: E0710 00:34:45.933695 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:34:45.935352 env[1214]: time="2025-07-10T00:34:45.935317278Z" level=info msg="CreateContainer within sandbox \"56fc640e88b677630b7a28a08953c1cf98755c197c0b0506397ab3ca4fb1afa3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:34:45.937796 env[1214]: time="2025-07-10T00:34:45.937747754Z" level=info msg="CreateContainer within sandbox \"f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:34:45.952100 env[1214]: time="2025-07-10T00:34:45.952039428Z" level=info msg="CreateContainer within sandbox \"f8dd53a0d0b55cd305b3469a5a7e168bc2cf9f5140a02bab951bfcfec3888785\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed68c29df60e24ce3f8f831690e04dab9c7bb7f686f300afd13dcf295241b41a\"" Jul 10 00:34:45.952596 env[1214]: time="2025-07-10T00:34:45.952569621Z" level=info msg="StartContainer for \"ed68c29df60e24ce3f8f831690e04dab9c7bb7f686f300afd13dcf295241b41a\"" Jul 10 00:34:45.954334 env[1214]: time="2025-07-10T00:34:45.954287891Z" level=info msg="CreateContainer within sandbox \"56fc640e88b677630b7a28a08953c1cf98755c197c0b0506397ab3ca4fb1afa3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d4af913a870e06369790fd0e6c5fa3f83793007db3a7652acd86b6e0f5a657c\"" Jul 10 00:34:45.954775 env[1214]: time="2025-07-10T00:34:45.954711718Z" level=info msg="StartContainer for \"2d4af913a870e06369790fd0e6c5fa3f83793007db3a7652acd86b6e0f5a657c\"" Jul 10 00:34:45.969987 systemd[1]: Started cri-containerd-2d4af913a870e06369790fd0e6c5fa3f83793007db3a7652acd86b6e0f5a657c.scope. Jul 10 00:34:45.971235 systemd[1]: Started cri-containerd-ed68c29df60e24ce3f8f831690e04dab9c7bb7f686f300afd13dcf295241b41a.scope. 
Jul 10 00:34:46.015387 env[1214]: time="2025-07-10T00:34:46.015337878Z" level=info msg="StartContainer for \"ed68c29df60e24ce3f8f831690e04dab9c7bb7f686f300afd13dcf295241b41a\" returns successfully"
Jul 10 00:34:46.015583 env[1214]: time="2025-07-10T00:34:46.015344679Z" level=info msg="StartContainer for \"2d4af913a870e06369790fd0e6c5fa3f83793007db3a7652acd86b6e0f5a657c\" returns successfully"
Jul 10 00:34:46.492377 kubelet[1912]: E0710 00:34:46.492347 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:46.494732 kubelet[1912]: E0710 00:34:46.494707 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:46.518685 kubelet[1912]: I0710 00:34:46.518529 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5n8g6" podStartSLOduration=21.518511444 podStartE2EDuration="21.518511444s" podCreationTimestamp="2025-07-10 00:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:34:46.505885107 +0000 UTC m=+28.190960135" watchObservedRunningTime="2025-07-10 00:34:46.518511444 +0000 UTC m=+28.203586432"
Jul 10 00:34:46.532168 kubelet[1912]: I0710 00:34:46.532087 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-952kh" podStartSLOduration=21.532069559 podStartE2EDuration="21.532069559s" podCreationTimestamp="2025-07-10 00:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:34:46.530646071 +0000 UTC m=+28.215721099" watchObservedRunningTime="2025-07-10 00:34:46.532069559 +0000 UTC m=+28.217144587"
Jul 10 00:34:47.496882 kubelet[1912]: E0710 00:34:47.496842 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:47.497353 kubelet[1912]: E0710 00:34:47.497334 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:48.498775 kubelet[1912]: E0710 00:34:48.498747 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:48.499509 kubelet[1912]: E0710 00:34:48.499485 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:49.819541 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:50608.service.
Jul 10 00:34:49.859716 sshd[3325]: Accepted publickey for core from 10.0.0.1 port 50608 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:34:49.861143 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:49.864963 systemd-logind[1201]: New session 7 of user core.
Jul 10 00:34:49.865421 systemd[1]: Started session-7.scope.
Jul 10 00:34:49.979209 sshd[3325]: pam_unix(sshd:session): session closed for user core
Jul 10 00:34:49.981888 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:50608.service: Deactivated successfully.
Jul 10 00:34:49.982664 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:34:49.983159 systemd-logind[1201]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:34:49.983843 systemd-logind[1201]: Removed session 7.
Jul 10 00:34:54.925037 kubelet[1912]: I0710 00:34:54.925001 1912 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:34:54.925610 kubelet[1912]: E0710 00:34:54.925590 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:34:54.987661 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:50370.service.
Jul 10 00:34:55.025936 sshd[3340]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:34:55.027741 sshd[3340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:55.032006 systemd-logind[1201]: New session 8 of user core.
Jul 10 00:34:55.032268 systemd[1]: Started session-8.scope.
Jul 10 00:34:55.147911 sshd[3340]: pam_unix(sshd:session): session closed for user core
Jul 10 00:34:55.150401 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:50370.service: Deactivated successfully.
Jul 10 00:34:55.151105 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:34:55.151587 systemd-logind[1201]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:34:55.152201 systemd-logind[1201]: Removed session 8.
Jul 10 00:34:55.510899 kubelet[1912]: E0710 00:34:55.510862 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:00.153326 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:50386.service.
Jul 10 00:35:00.197155 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 50386 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:00.198682 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:00.206821 systemd-logind[1201]: New session 9 of user core.
Jul 10 00:35:00.207340 systemd[1]: Started session-9.scope.
Jul 10 00:35:00.335211 sshd[3362]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:00.338589 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:50398.service.
Jul 10 00:35:00.349759 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:50386.service: Deactivated successfully.
Jul 10 00:35:00.350542 systemd[1]: session-9.scope: Deactivated successfully.
Jul 10 00:35:00.352499 systemd-logind[1201]: Session 9 logged out. Waiting for processes to exit.
Jul 10 00:35:00.354479 systemd-logind[1201]: Removed session 9.
Jul 10 00:35:00.383391 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 50398 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:00.384784 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:00.390674 systemd[1]: Started session-10.scope.
Jul 10 00:35:00.391128 systemd-logind[1201]: New session 10 of user core.
Jul 10 00:35:00.582614 sshd[3375]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:00.585202 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:50414.service.
Jul 10 00:35:00.587459 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:50398.service: Deactivated successfully.
Jul 10 00:35:00.588104 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 00:35:00.588921 systemd-logind[1201]: Session 10 logged out. Waiting for processes to exit.
Jul 10 00:35:00.590017 systemd-logind[1201]: Removed session 10.
Jul 10 00:35:00.635411 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 50414 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:00.637069 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:00.641232 systemd-logind[1201]: New session 11 of user core.
Jul 10 00:35:00.641673 systemd[1]: Started session-11.scope.
Jul 10 00:35:00.767018 sshd[3388]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:00.769973 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:50414.service: Deactivated successfully.
Jul 10 00:35:00.770685 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 00:35:00.771303 systemd-logind[1201]: Session 11 logged out. Waiting for processes to exit.
Jul 10 00:35:00.772125 systemd-logind[1201]: Removed session 11.
Jul 10 00:35:05.772025 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:49048.service.
Jul 10 00:35:05.809690 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 49048 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:05.811217 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:05.815127 systemd-logind[1201]: New session 12 of user core.
Jul 10 00:35:05.815663 systemd[1]: Started session-12.scope.
Jul 10 00:35:05.928833 sshd[3404]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:05.931391 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:49048.service: Deactivated successfully.
Jul 10 00:35:05.932082 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 00:35:05.932573 systemd-logind[1201]: Session 12 logged out. Waiting for processes to exit.
Jul 10 00:35:05.933442 systemd-logind[1201]: Removed session 12.
Jul 10 00:35:10.933078 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:49052.service.
Jul 10 00:35:10.974509 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 49052 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:10.975871 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:10.982604 systemd-logind[1201]: New session 13 of user core.
Jul 10 00:35:10.983532 systemd[1]: Started session-13.scope.
Jul 10 00:35:11.104469 sshd[3419]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:11.106855 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:49052.service: Deactivated successfully.
Jul 10 00:35:11.107570 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 00:35:11.108126 systemd-logind[1201]: Session 13 logged out. Waiting for processes to exit.
Jul 10 00:35:11.108882 systemd-logind[1201]: Removed session 13.
Jul 10 00:35:16.109274 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:55212.service.
Jul 10 00:35:16.151238 sshd[3432]: Accepted publickey for core from 10.0.0.1 port 55212 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:16.151653 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:16.155253 systemd-logind[1201]: New session 14 of user core.
Jul 10 00:35:16.155701 systemd[1]: Started session-14.scope.
Jul 10 00:35:16.276797 sshd[3432]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:16.280885 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:55216.service.
Jul 10 00:35:16.281387 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:55212.service: Deactivated successfully.
Jul 10 00:35:16.282172 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 00:35:16.283294 systemd-logind[1201]: Session 14 logged out. Waiting for processes to exit.
Jul 10 00:35:16.284487 systemd-logind[1201]: Removed session 14.
Jul 10 00:35:16.322807 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 55216 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:16.324212 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:16.327318 systemd-logind[1201]: New session 15 of user core.
Jul 10 00:35:16.328094 systemd[1]: Started session-15.scope.
Jul 10 00:35:16.532234 sshd[3445]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:16.535236 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:55216.service: Deactivated successfully.
Jul 10 00:35:16.535898 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:35:16.536475 systemd-logind[1201]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:35:16.538250 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:55228.service.
Jul 10 00:35:16.540063 systemd-logind[1201]: Removed session 15.
Jul 10 00:35:16.579030 sshd[3457]: Accepted publickey for core from 10.0.0.1 port 55228 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:16.580196 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:16.584414 systemd-logind[1201]: New session 16 of user core.
Jul 10 00:35:16.584864 systemd[1]: Started session-16.scope.
Jul 10 00:35:17.390907 sshd[3457]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:17.394765 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:55244.service.
Jul 10 00:35:17.395405 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:55228.service: Deactivated successfully.
Jul 10 00:35:17.396387 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:35:17.397041 systemd-logind[1201]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:35:17.397993 systemd-logind[1201]: Removed session 16.
Jul 10 00:35:17.437583 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 55244 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:17.438885 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:17.442515 systemd-logind[1201]: New session 17 of user core.
Jul 10 00:35:17.443381 systemd[1]: Started session-17.scope.
Jul 10 00:35:17.659451 sshd[3474]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:17.661634 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:55246.service.
Jul 10 00:35:17.663173 systemd-logind[1201]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:35:17.663392 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:55244.service: Deactivated successfully.
Jul 10 00:35:17.664025 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:35:17.664917 systemd-logind[1201]: Removed session 17.
Jul 10 00:35:17.701245 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 55246 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:17.702576 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:17.706146 systemd-logind[1201]: New session 18 of user core.
Jul 10 00:35:17.706617 systemd[1]: Started session-18.scope.
Jul 10 00:35:17.820778 sshd[3486]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:17.823031 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:55246.service: Deactivated successfully.
Jul 10 00:35:17.823756 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:35:17.824342 systemd-logind[1201]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:35:17.825097 systemd-logind[1201]: Removed session 18.
Jul 10 00:35:22.825776 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:46590.service.
Jul 10 00:35:22.870940 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 46590 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:22.873104 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:22.877647 systemd-logind[1201]: New session 19 of user core.
Jul 10 00:35:22.878086 systemd[1]: Started session-19.scope.
Jul 10 00:35:22.991844 sshd[3504]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:22.994333 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:46590.service: Deactivated successfully.
Jul 10 00:35:22.995053 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:35:22.995550 systemd-logind[1201]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:35:22.996265 systemd-logind[1201]: Removed session 19.
Jul 10 00:35:27.995651 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:46604.service.
Jul 10 00:35:28.032968 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 46604 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:28.034507 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:28.038486 systemd-logind[1201]: New session 20 of user core.
Jul 10 00:35:28.038903 systemd[1]: Started session-20.scope.
Jul 10 00:35:28.147144 sshd[3519]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:28.149415 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:46604.service: Deactivated successfully.
Jul 10 00:35:28.150142 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:35:28.150697 systemd-logind[1201]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:35:28.151347 systemd-logind[1201]: Removed session 20.
Jul 10 00:35:33.151406 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:42336.service.
Jul 10 00:35:33.191382 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 42336 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:33.192974 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:33.197184 systemd[1]: Started session-21.scope.
Jul 10 00:35:33.197475 systemd-logind[1201]: New session 21 of user core.
Jul 10 00:35:33.303586 sshd[3532]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:33.305974 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:42336.service: Deactivated successfully.
Jul 10 00:35:33.306717 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:35:33.307210 systemd-logind[1201]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:35:33.307955 systemd-logind[1201]: Removed session 21.
Jul 10 00:35:38.311133 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:42346.service.
Jul 10 00:35:38.350975 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 42346 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:38.352378 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:38.357799 systemd[1]: Started session-22.scope.
Jul 10 00:35:38.358367 systemd-logind[1201]: New session 22 of user core.
Jul 10 00:35:38.478646 sshd[3546]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:38.482258 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:42350.service.
Jul 10 00:35:38.485907 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:35:38.486502 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:42346.service: Deactivated successfully.
Jul 10 00:35:38.490247 systemd-logind[1201]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:35:38.491116 systemd-logind[1201]: Removed session 22.
Jul 10 00:35:38.521067 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 42350 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:38.522248 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:38.527278 systemd[1]: Started session-23.scope.
Jul 10 00:35:38.527867 systemd-logind[1201]: New session 23 of user core.
Jul 10 00:35:40.173191 env[1214]: time="2025-07-10T00:35:40.173137286Z" level=info msg="StopContainer for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" with timeout 30 (s)"
Jul 10 00:35:40.173993 env[1214]: time="2025-07-10T00:35:40.173952645Z" level=info msg="Stop container \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" with signal terminated"
Jul 10 00:35:40.196921 systemd[1]: cri-containerd-7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3.scope: Deactivated successfully.
Jul 10 00:35:40.205675 systemd[1]: run-containerd-runc-k8s.io-c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb-runc.UKP5jh.mount: Deactivated successfully.
Jul 10 00:35:40.223002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3-rootfs.mount: Deactivated successfully.
Jul 10 00:35:40.228869 env[1214]: time="2025-07-10T00:35:40.228808072Z" level=info msg="shim disconnected" id=7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3
Jul 10 00:35:40.228869 env[1214]: time="2025-07-10T00:35:40.228865471Z" level=warning msg="cleaning up after shim disconnected" id=7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3 namespace=k8s.io
Jul 10 00:35:40.228869 env[1214]: time="2025-07-10T00:35:40.228875351Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:40.232760 env[1214]: time="2025-07-10T00:35:40.232695305Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:35:40.235898 env[1214]: time="2025-07-10T00:35:40.235865860Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:40.238704 env[1214]: time="2025-07-10T00:35:40.238660055Z" level=info msg="StopContainer for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" returns successfully"
Jul 10 00:35:40.239151 env[1214]: time="2025-07-10T00:35:40.239119134Z" level=info msg="StopContainer for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" with timeout 2 (s)"
Jul 10 00:35:40.239282 env[1214]: time="2025-07-10T00:35:40.239250414Z" level=info msg="StopPodSandbox for \"cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab\""
Jul 10 00:35:40.239405 env[1214]: time="2025-07-10T00:35:40.239380374Z" level=info msg="Container to stop \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:35:40.241131 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab-shm.mount: Deactivated successfully.
Jul 10 00:35:40.242764 env[1214]: time="2025-07-10T00:35:40.242733368Z" level=info msg="Stop container \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" with signal terminated"
Jul 10 00:35:40.248204 systemd[1]: cri-containerd-cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab.scope: Deactivated successfully.
Jul 10 00:35:40.248647 systemd-networkd[1041]: lxc_health: Link DOWN
Jul 10 00:35:40.248651 systemd-networkd[1041]: lxc_health: Lost carrier
Jul 10 00:35:40.270252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab-rootfs.mount: Deactivated successfully.
Jul 10 00:35:40.276981 env[1214]: time="2025-07-10T00:35:40.276925590Z" level=info msg="shim disconnected" id=cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab
Jul 10 00:35:40.277690 env[1214]: time="2025-07-10T00:35:40.277665188Z" level=warning msg="cleaning up after shim disconnected" id=cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab namespace=k8s.io
Jul 10 00:35:40.277787 env[1214]: time="2025-07-10T00:35:40.277773228Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:40.285796 env[1214]: time="2025-07-10T00:35:40.285758534Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3646 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:40.286240 env[1214]: time="2025-07-10T00:35:40.286194574Z" level=info msg="TearDown network for sandbox \"cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab\" successfully"
Jul 10 00:35:40.286472 env[1214]: time="2025-07-10T00:35:40.286446173Z" level=info msg="StopPodSandbox for \"cb3ff0796f748d244b235574eb4cbdcab25e4050929e119ab3ad5b528bcb2bab\" returns successfully"
Jul 10 00:35:40.288331 systemd[1]: cri-containerd-c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb.scope: Deactivated successfully.
Jul 10 00:35:40.288649 systemd[1]: cri-containerd-c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb.scope: Consumed 6.596s CPU time.
Jul 10 00:35:40.317485 env[1214]: time="2025-07-10T00:35:40.317413881Z" level=info msg="shim disconnected" id=c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb
Jul 10 00:35:40.317485 env[1214]: time="2025-07-10T00:35:40.317479400Z" level=warning msg="cleaning up after shim disconnected" id=c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb namespace=k8s.io
Jul 10 00:35:40.317485 env[1214]: time="2025-07-10T00:35:40.317490520Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:40.327213 env[1214]: time="2025-07-10T00:35:40.327150344Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3671 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:40.329412 env[1214]: time="2025-07-10T00:35:40.329364100Z" level=info msg="StopContainer for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" returns successfully"
Jul 10 00:35:40.330009 env[1214]: time="2025-07-10T00:35:40.329966219Z" level=info msg="StopPodSandbox for \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\""
Jul 10 00:35:40.330058 env[1214]: time="2025-07-10T00:35:40.330038699Z" level=info msg="Container to stop \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:35:40.330089 env[1214]: time="2025-07-10T00:35:40.330058099Z" level=info msg="Container to stop \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:35:40.330089 env[1214]: time="2025-07-10T00:35:40.330069739Z" level=info msg="Container to stop \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:35:40.330089 env[1214]: time="2025-07-10T00:35:40.330080499Z" level=info msg="Container to stop \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:35:40.330185 env[1214]: time="2025-07-10T00:35:40.330091139Z" level=info msg="Container to stop \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:35:40.336773 systemd[1]: cri-containerd-76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c.scope: Deactivated successfully.
Jul 10 00:35:40.338098 kubelet[1912]: I0710 00:35:40.338044 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pck55\" (UniqueName: \"kubernetes.io/projected/f5e4c308-e8f2-43ba-b7a2-377166168980-kube-api-access-pck55\") pod \"f5e4c308-e8f2-43ba-b7a2-377166168980\" (UID: \"f5e4c308-e8f2-43ba-b7a2-377166168980\") "
Jul 10 00:35:40.338098 kubelet[1912]: I0710 00:35:40.338093 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5e4c308-e8f2-43ba-b7a2-377166168980-cilium-config-path\") pod \"f5e4c308-e8f2-43ba-b7a2-377166168980\" (UID: \"f5e4c308-e8f2-43ba-b7a2-377166168980\") "
Jul 10 00:35:40.343505 kubelet[1912]: I0710 00:35:40.343458 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e4c308-e8f2-43ba-b7a2-377166168980-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5e4c308-e8f2-43ba-b7a2-377166168980" (UID: "f5e4c308-e8f2-43ba-b7a2-377166168980"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 10 00:35:40.346853 kubelet[1912]: I0710 00:35:40.346817 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e4c308-e8f2-43ba-b7a2-377166168980-kube-api-access-pck55" (OuterVolumeSpecName: "kube-api-access-pck55") pod "f5e4c308-e8f2-43ba-b7a2-377166168980" (UID: "f5e4c308-e8f2-43ba-b7a2-377166168980"). InnerVolumeSpecName "kube-api-access-pck55". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:35:40.366184 env[1214]: time="2025-07-10T00:35:40.366106478Z" level=info msg="shim disconnected" id=76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c
Jul 10 00:35:40.366184 env[1214]: time="2025-07-10T00:35:40.366175397Z" level=warning msg="cleaning up after shim disconnected" id=76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c namespace=k8s.io
Jul 10 00:35:40.366184 env[1214]: time="2025-07-10T00:35:40.366184957Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:40.373655 env[1214]: time="2025-07-10T00:35:40.373606265Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3702 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:40.373951 env[1214]: time="2025-07-10T00:35:40.373924104Z" level=info msg="TearDown network for sandbox \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" successfully"
Jul 10 00:35:40.373987 env[1214]: time="2025-07-10T00:35:40.373950984Z" level=info msg="StopPodSandbox for \"76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c\" returns successfully"
Jul 10 00:35:40.424164 systemd[1]: Removed slice kubepods-besteffort-podf5e4c308_e8f2_43ba_b7a2_377166168980.slice.
Jul 10 00:35:40.438487 kubelet[1912]: I0710 00:35:40.438432 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-run\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.438708 kubelet[1912]: I0710 00:35:40.438688 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-config-path\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.438812 kubelet[1912]: I0710 00:35:40.438800 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nks6s\" (UniqueName: \"kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-kube-api-access-nks6s\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439189 kubelet[1912]: I0710 00:35:40.439171 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-hostproc\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439353 kubelet[1912]: I0710 00:35:40.439339 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-etc-cni-netd\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439455 kubelet[1912]: I0710 00:35:40.439438 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/339dedd7-13e4-4691-9620-c8e0b2532c87-clustermesh-secrets\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439535 kubelet[1912]: I0710 00:35:40.439522 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-kernel\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439615 kubelet[1912]: I0710 00:35:40.439601 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cni-path\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439687 kubelet[1912]: I0710 00:35:40.439674 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-xtables-lock\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439762 kubelet[1912]: I0710 00:35:40.439749 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-lib-modules\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439841 kubelet[1912]: I0710 00:35:40.438502 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439878 kubelet[1912]: I0710 00:35:40.439263 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-hostproc" (OuterVolumeSpecName: "hostproc") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439878 kubelet[1912]: I0710 00:35:40.439406 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439878 kubelet[1912]: I0710 00:35:40.439565 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439878 kubelet[1912]: I0710 00:35:40.439647 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cni-path" (OuterVolumeSpecName: "cni-path") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439878 kubelet[1912]: I0710 00:35:40.439734 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439999 kubelet[1912]: I0710 00:35:40.439812 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.439999 kubelet[1912]: I0710 00:35:40.439819 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-bpf-maps\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439999 kubelet[1912]: I0710 00:35:40.439903 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-net\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439999 kubelet[1912]: I0710 00:35:40.439928 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-hubble-tls\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439999 kubelet[1912]: I0710 00:35:40.439946 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-cgroup\") pod \"339dedd7-13e4-4691-9620-c8e0b2532c87\" (UID: \"339dedd7-13e4-4691-9620-c8e0b2532c87\") "
Jul 10 00:35:40.439999 kubelet[1912]: I0710 00:35:40.439986 1912 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.439996 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440004 1912 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440029 1912 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440038 1912 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pck55\" (UniqueName: \"kubernetes.io/projected/f5e4c308-e8f2-43ba-b7a2-377166168980-kube-api-access-pck55\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440047 1912 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440055 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5e4c308-e8f2-43ba-b7a2-377166168980-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440062 1912 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440209 kubelet[1912]: I0710 00:35:40.440070 1912 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.440662 kubelet[1912]: I0710 00:35:40.440092 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.440662 kubelet[1912]: I0710 00:35:40.440105 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.440834 kubelet[1912]: I0710 00:35:40.440811 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:40.441636 kubelet[1912]: I0710 00:35:40.441596 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-kube-api-access-nks6s" (OuterVolumeSpecName: "kube-api-access-nks6s") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "kube-api-access-nks6s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:35:40.441795 kubelet[1912]: I0710 00:35:40.441692 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 10 00:35:40.442048 kubelet[1912]: I0710 00:35:40.442020 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339dedd7-13e4-4691-9620-c8e0b2532c87-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 10 00:35:40.443895 kubelet[1912]: I0710 00:35:40.443862 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "339dedd7-13e4-4691-9620-c8e0b2532c87" (UID: "339dedd7-13e4-4691-9620-c8e0b2532c87"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:35:40.541121 kubelet[1912]: I0710 00:35:40.541071 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.541121 kubelet[1912]: I0710 00:35:40.541106 1912 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nks6s\" (UniqueName: \"kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-kube-api-access-nks6s\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.541121 kubelet[1912]: I0710 00:35:40.541118 1912 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/339dedd7-13e4-4691-9620-c8e0b2532c87-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.541121 kubelet[1912]: I0710 00:35:40.541128 1912 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.541121 kubelet[1912]: I0710 00:35:40.541137 1912 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.541415 kubelet[1912]: I0710 00:35:40.541146 1912 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/339dedd7-13e4-4691-9620-c8e0b2532c87-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.541415 kubelet[1912]: I0710 00:35:40.541153 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/339dedd7-13e4-4691-9620-c8e0b2532c87-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:40.605820 kubelet[1912]: I0710 00:35:40.605765 1912 scope.go:117] "RemoveContainer" containerID="7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3"
Jul 10 00:35:40.612778 env[1214]: time="2025-07-10T00:35:40.612644417Z" level=info msg="RemoveContainer for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\""
Jul 10 00:35:40.617007 env[1214]: time="2025-07-10T00:35:40.616975810Z" level=info msg="RemoveContainer for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" returns successfully"
Jul 10 00:35:40.617470 kubelet[1912]: I0710 00:35:40.617444 1912 scope.go:117] "RemoveContainer" containerID="7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3"
Jul 10 00:35:40.617701 env[1214]: time="2025-07-10T00:35:40.617638249Z" level=error msg="ContainerStatus for \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\": not found"
Jul 10 00:35:40.617836 kubelet[1912]: E0710 00:35:40.617814 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\": not found" containerID="7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3"
Jul 10 00:35:40.619156 kubelet[1912]: I0710 00:35:40.618980 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3"} err="failed to get container status \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fbb06f7c4da0983f5c4afb0dd1544aff51429257a34c2286498b13727c518f3\": not found"
Jul 10 00:35:40.619341 kubelet[1912]: I0710 00:35:40.619299 1912 scope.go:117] "RemoveContainer" containerID="c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb"
Jul 10 00:35:40.620054 systemd[1]: Removed slice kubepods-burstable-pod339dedd7_13e4_4691_9620_c8e0b2532c87.slice.
Jul 10 00:35:40.620131 systemd[1]: kubepods-burstable-pod339dedd7_13e4_4691_9620_c8e0b2532c87.slice: Consumed 6.808s CPU time.
Jul 10 00:35:40.622442 env[1214]: time="2025-07-10T00:35:40.622128761Z" level=info msg="RemoveContainer for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\""
Jul 10 00:35:40.624751 env[1214]: time="2025-07-10T00:35:40.624718797Z" level=info msg="RemoveContainer for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" returns successfully"
Jul 10 00:35:40.625275 kubelet[1912]: I0710 00:35:40.625219 1912 scope.go:117] "RemoveContainer" containerID="9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8"
Jul 10 00:35:40.626460 env[1214]: time="2025-07-10T00:35:40.626432194Z" level=info msg="RemoveContainer for \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\""
Jul 10 00:35:40.631340 env[1214]: time="2025-07-10T00:35:40.631184146Z" level=info msg="RemoveContainer for \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\" returns successfully"
Jul 10 00:35:40.631440 kubelet[1912]: I0710 00:35:40.631397 1912 scope.go:117] "RemoveContainer" containerID="4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c"
Jul 10 00:35:40.632459 env[1214]: time="2025-07-10T00:35:40.632415184Z" level=info msg="RemoveContainer for \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\""
Jul 10 00:35:40.635579 env[1214]: time="2025-07-10T00:35:40.635538578Z" level=info msg="RemoveContainer for \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\" returns successfully"
Jul 10 00:35:40.635700 kubelet[1912]: I0710 00:35:40.635686 1912 scope.go:117] "RemoveContainer" containerID="57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f"
Jul 10 00:35:40.636607 env[1214]: time="2025-07-10T00:35:40.636578336Z" level=info msg="RemoveContainer for \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\""
Jul 10 00:35:40.638825 env[1214]: time="2025-07-10T00:35:40.638788093Z" level=info msg="RemoveContainer for \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\" returns successfully"
Jul 10 00:35:40.638967 kubelet[1912]: I0710 00:35:40.638946 1912 scope.go:117] "RemoveContainer" containerID="fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af"
Jul 10 00:35:40.639925 env[1214]: time="2025-07-10T00:35:40.639895491Z" level=info msg="RemoveContainer for \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\""
Jul 10 00:35:40.642058 env[1214]: time="2025-07-10T00:35:40.642026047Z" level=info msg="RemoveContainer for \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\" returns successfully"
Jul 10 00:35:40.642207 kubelet[1912]: I0710 00:35:40.642180 1912 scope.go:117] "RemoveContainer" containerID="c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb"
Jul 10 00:35:40.642522 env[1214]: time="2025-07-10T00:35:40.642370807Z" level=error msg="ContainerStatus for \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\": not found"
Jul 10 00:35:40.642671 kubelet[1912]: E0710 00:35:40.642640 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\": not found" containerID="c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb"
Jul 10 00:35:40.642712 kubelet[1912]: I0710 00:35:40.642672 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb"} err="failed to get container status \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb\": not found"
Jul 10 00:35:40.642712 kubelet[1912]: I0710 00:35:40.642695 1912 scope.go:117] "RemoveContainer" containerID="9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8"
Jul 10 00:35:40.642911 env[1214]: time="2025-07-10T00:35:40.642856766Z" level=error msg="ContainerStatus for \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\": not found"
Jul 10 00:35:40.643002 kubelet[1912]: E0710 00:35:40.642984 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\": not found" containerID="9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8"
Jul 10 00:35:40.643031 kubelet[1912]: I0710 00:35:40.643010 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8"} err="failed to get container status \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b1404a8987a145b9bb1f2147f9995e29b127bb437b19f09195b3c132b2315a8\": not found"
Jul 10 00:35:40.643031 kubelet[1912]: I0710 00:35:40.643026 1912 scope.go:117] "RemoveContainer" containerID="4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c"
Jul 10 00:35:40.643201 env[1214]: time="2025-07-10T00:35:40.643155165Z" level=error msg="ContainerStatus for \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\": not found"
Jul 10 00:35:40.643319 kubelet[1912]: E0710 00:35:40.643301 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\": not found" containerID="4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c"
Jul 10 00:35:40.643357 kubelet[1912]: I0710 00:35:40.643339 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c"} err="failed to get container status \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f1b3c3cd84e914505a2859f5963ccbc459450150f9088e28f32f5f0ce39bb1c\": not found"
Jul 10 00:35:40.643381 kubelet[1912]: I0710 00:35:40.643359 1912 scope.go:117] "RemoveContainer" containerID="57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f"
Jul 10 00:35:40.643530 env[1214]: time="2025-07-10T00:35:40.643493765Z" level=error msg="ContainerStatus for \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\": not found"
Jul 10 00:35:40.643610 kubelet[1912]: E0710 00:35:40.643592 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\": not found" containerID="57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f"
Jul 10 00:35:40.643635 kubelet[1912]: I0710 00:35:40.643615 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f"} err="failed to get container status \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"57f0393d983d3fc9aa6a63389f76d57fb60cf10389c73476c2041be6f02d0a0f\": not found"
Jul 10 00:35:40.643635 kubelet[1912]: I0710 00:35:40.643630 1912 scope.go:117] "RemoveContainer" containerID="fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af"
Jul 10 00:35:40.643768 env[1214]: time="2025-07-10T00:35:40.643734044Z" level=error msg="ContainerStatus for \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\": not found"
Jul 10 00:35:40.643847 kubelet[1912]: E0710 00:35:40.643831 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\": not found" containerID="fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af"
Jul 10 00:35:40.643874 kubelet[1912]: I0710 00:35:40.643852 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af"} err="failed to get container status \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc6f81294b10d1fb134cb85f15cf734b392978665b818389e31267e2c9f185af\": not found"
Jul 10 00:35:41.202782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2dce4a6145cd4dbdec869a4d293e86474c3cc5cc7f2200099bcabea2f3366cb-rootfs.mount: Deactivated successfully.
Jul 10 00:35:41.202881 systemd[1]: var-lib-kubelet-pods-f5e4c308\x2de8f2\x2d43ba\x2db7a2\x2d377166168980-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpck55.mount: Deactivated successfully.
Jul 10 00:35:41.202943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c-rootfs.mount: Deactivated successfully.
Jul 10 00:35:41.202989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76dfadbcb2672b3b075645b0544acd49bd54b48f4d82850647fa5409edfc6e5c-shm.mount: Deactivated successfully.
Jul 10 00:35:41.203037 systemd[1]: var-lib-kubelet-pods-339dedd7\x2d13e4\x2d4691\x2d9620\x2dc8e0b2532c87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnks6s.mount: Deactivated successfully.
Jul 10 00:35:41.203090 systemd[1]: var-lib-kubelet-pods-339dedd7\x2d13e4\x2d4691\x2d9620\x2dc8e0b2532c87-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 10 00:35:41.203139 systemd[1]: var-lib-kubelet-pods-339dedd7\x2d13e4\x2d4691\x2d9620\x2dc8e0b2532c87-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 10 00:35:41.417639 kubelet[1912]: E0710 00:35:41.417603 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:42.119587 sshd[3558]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:42.130145 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:42354.service.
Jul 10 00:35:42.130719 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:42350.service: Deactivated successfully.
Jul 10 00:35:42.133315 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 00:35:42.139809 systemd-logind[1201]: Session 23 logged out. Waiting for processes to exit.
Jul 10 00:35:42.143901 systemd-logind[1201]: Removed session 23.
Jul 10 00:35:42.179682 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 42354 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:42.180924 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:42.186336 systemd[1]: Started session-24.scope.
Jul 10 00:35:42.187330 systemd-logind[1201]: New session 24 of user core.
Jul 10 00:35:42.419739 kubelet[1912]: I0710 00:35:42.419637 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339dedd7-13e4-4691-9620-c8e0b2532c87" path="/var/lib/kubelet/pods/339dedd7-13e4-4691-9620-c8e0b2532c87/volumes"
Jul 10 00:35:42.420925 kubelet[1912]: I0710 00:35:42.420897 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e4c308-e8f2-43ba-b7a2-377166168980" path="/var/lib/kubelet/pods/f5e4c308-e8f2-43ba-b7a2-377166168980/volumes"
Jul 10 00:35:43.211054 sshd[3719]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:43.216047 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:57392.service.
Jul 10 00:35:43.219966 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:42354.service: Deactivated successfully.
Jul 10 00:35:43.220759 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 00:35:43.223714 systemd-logind[1201]: Session 24 logged out. Waiting for processes to exit.
Jul 10 00:35:43.226776 systemd-logind[1201]: Removed session 24.
Jul 10 00:35:43.241771 kubelet[1912]: I0710 00:35:43.241620 1912 memory_manager.go:355] "RemoveStaleState removing state" podUID="339dedd7-13e4-4691-9620-c8e0b2532c87" containerName="cilium-agent"
Jul 10 00:35:43.241771 kubelet[1912]: I0710 00:35:43.241659 1912 memory_manager.go:355] "RemoveStaleState removing state" podUID="f5e4c308-e8f2-43ba-b7a2-377166168980" containerName="cilium-operator"
Jul 10 00:35:43.246895 systemd[1]: Created slice kubepods-burstable-poda28ba47c_8745_4c5d_8d75_eac0985e0c7c.slice.
Jul 10 00:35:43.266142 sshd[3730]: Accepted publickey for core from 10.0.0.1 port 57392 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:43.269695 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:43.273649 systemd-logind[1201]: New session 25 of user core.
Jul 10 00:35:43.274556 systemd[1]: Started session-25.scope.
Jul 10 00:35:43.360878 kubelet[1912]: I0710 00:35:43.360837 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-etc-cni-netd\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.360878 kubelet[1912]: I0710 00:35:43.360876 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-net\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361049 kubelet[1912]: I0710 00:35:43.360897 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-run\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361049 kubelet[1912]: I0710 00:35:43.360913 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-bpf-maps\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361049 kubelet[1912]: I0710 00:35:43.360929 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-cgroup\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361049 kubelet[1912]: I0710 00:35:43.360947 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-lib-modules\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361049 kubelet[1912]: I0710 00:35:43.360963 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hostproc\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361049 kubelet[1912]: I0710 00:35:43.360979 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-xtables-lock\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361185 kubelet[1912]: I0710 00:35:43.360996 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-config-path\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361185 kubelet[1912]: I0710 00:35:43.361011 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjn5h\" (UniqueName: \"kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-kube-api-access-bjn5h\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361185 kubelet[1912]: I0710 00:35:43.361030 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cni-path\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361185 kubelet[1912]: I0710 00:35:43.361047 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-ipsec-secrets\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361185 kubelet[1912]: I0710 00:35:43.361063 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-kernel\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361411 kubelet[1912]: I0710 00:35:43.361081 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-clustermesh-secrets\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.361411 kubelet[1912]: I0710 00:35:43.361096 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hubble-tls\") pod \"cilium-s269q\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") " pod="kube-system/cilium-s269q"
Jul 10 00:35:43.395427 sshd[3730]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:43.399292 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:57398.service.
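
Each VerifyControllerAttachedVolume line above corresponds to one volume in the new cilium-s269q pod's spec; apart from the config map, the two secrets, and the projected service-account token, they are all hostPath mounts. A sketch of what a few of those volumes look like as Kubernetes API objects (assumes k8s.io/api; the host paths shown are typical Cilium defaults and are an assumption here, not taken from this log):

    package podspec

    import corev1 "k8s.io/api/core/v1"

    // ciliumHostPaths builds a subset of the host-path volumes named in the
    // reconciler log lines above.
    func ciliumHostPaths() []corev1.Volume {
        hp := func(name, path string) corev1.Volume {
            return corev1.Volume{
                Name:         name,
                VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: path}},
            }
        }
        return []corev1.Volume{
            hp("bpf-maps", "/sys/fs/bpf"),
            hp("cilium-run", "/var/run/cilium"),
            hp("cni-path", "/opt/cni/bin"),
            hp("etc-cni-netd", "/etc/cni/net.d"),
            hp("lib-modules", "/lib/modules"),
            hp("xtables-lock", "/run/xtables.lock"),
        }
    }
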
Jul 10 00:35:43.405251 kubelet[1912]: E0710 00:35:43.404474 1912 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-bjn5h lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-s269q" podUID="a28ba47c-8745-4c5d-8d75-eac0985e0c7c"
Jul 10 00:35:43.405598 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:57392.service: Deactivated successfully.
Jul 10 00:35:43.406203 systemd-logind[1201]: Session 25 logged out. Waiting for processes to exit.
Jul 10 00:35:43.406276 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 00:35:43.406991 systemd-logind[1201]: Removed session 25.
Jul 10 00:35:43.439055 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 57398 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es
Jul 10 00:35:43.440341 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:43.443734 systemd-logind[1201]: New session 26 of user core.
Jul 10 00:35:43.444570 systemd[1]: Started session-26.scope.
Jul 10 00:35:43.454366 kubelet[1912]: E0710 00:35:43.454322 1912 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:35:43.663515 kubelet[1912]: I0710 00:35:43.663466 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-config-path\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663515 kubelet[1912]: I0710 00:35:43.663514 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-ipsec-secrets\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663705 kubelet[1912]: I0710 00:35:43.663532 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-run\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663705 kubelet[1912]: I0710 00:35:43.663551 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjn5h\" (UniqueName: \"kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-kube-api-access-bjn5h\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663705 kubelet[1912]: I0710 00:35:43.663568 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-xtables-lock\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663705 kubelet[1912]: I0710 00:35:43.663586 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hubble-tls\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663705 kubelet[1912]: I0710 00:35:43.663600 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hostproc\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663705 kubelet[1912]: I0710 00:35:43.663614 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-kernel\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663835 kubelet[1912]: I0710 00:35:43.663629 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-bpf-maps\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663835 kubelet[1912]: I0710 00:35:43.663643 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-etc-cni-netd\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663835 kubelet[1912]: I0710 00:35:43.663657 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-net\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663835 kubelet[1912]: I0710 00:35:43.663672 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-lib-modules\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663835 kubelet[1912]: I0710 00:35:43.663691 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-clustermesh-secrets\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663835 kubelet[1912]: I0710 00:35:43.663707 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-cgroup\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663959 kubelet[1912]: I0710 00:35:43.663722 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cni-path\") pod \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\" (UID: \"a28ba47c-8745-4c5d-8d75-eac0985e0c7c\") "
Jul 10 00:35:43.663959 kubelet[1912]: I0710 00:35:43.663774 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.664115 kubelet[1912]: I0710 00:35:43.664091 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.664183 kubelet[1912]: I0710 00:35:43.664095 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.664265 kubelet[1912]: I0710 00:35:43.664111 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.665306 kubelet[1912]: I0710 00:35:43.665239 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 10 00:35:43.665306 kubelet[1912]: I0710 00:35:43.665303 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.665466 kubelet[1912]: I0710 00:35:43.665322 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.665466 kubelet[1912]: I0710 00:35:43.665335 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.666718 kubelet[1912]: I0710 00:35:43.666642 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-kube-api-access-bjn5h" (OuterVolumeSpecName: "kube-api-access-bjn5h") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "kube-api-access-bjn5h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:35:43.666718 kubelet[1912]: I0710 00:35:43.666696 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.666820 kubelet[1912]: I0710 00:35:43.666715 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.666820 kubelet[1912]: I0710 00:35:43.666749 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:35:43.667105 kubelet[1912]: I0710 00:35:43.667082 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:35:43.667690 systemd[1]: var-lib-kubelet-pods-a28ba47c\x2d8745\x2d4c5d\x2d8d75\x2deac0985e0c7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjn5h.mount: Deactivated successfully.
Jul 10 00:35:43.668367 kubelet[1912]: I0710 00:35:43.667691 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 10 00:35:43.668367 kubelet[1912]: I0710 00:35:43.667897 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a28ba47c-8745-4c5d-8d75-eac0985e0c7c" (UID: "a28ba47c-8745-4c5d-8d75-eac0985e0c7c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 10 00:35:43.667792 systemd[1]: var-lib-kubelet-pods-a28ba47c\x2d8745\x2d4c5d\x2d8d75\x2deac0985e0c7c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 10 00:35:43.667844 systemd[1]: var-lib-kubelet-pods-a28ba47c\x2d8745\x2d4c5d\x2d8d75\x2deac0985e0c7c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 10 00:35:43.670022 systemd[1]: var-lib-kubelet-pods-a28ba47c\x2d8745\x2d4c5d\x2d8d75\x2deac0985e0c7c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 10 00:35:43.764346 kubelet[1912]: I0710 00:35:43.764305 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764346 kubelet[1912]: I0710 00:35:43.764338 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764346 kubelet[1912]: I0710 00:35:43.764348 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764346 kubelet[1912]: I0710 00:35:43.764358 1912 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjn5h\" (UniqueName: \"kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-kube-api-access-bjn5h\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764368 1912 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764376 1912 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764384 1912 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764399 1912 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764410 1912 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764418 1912 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764426 1912 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764558 kubelet[1912]: I0710 00:35:43.764434 1912 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764727 kubelet[1912]: I0710 00:35:43.764441 1912 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764727 kubelet[1912]: I0710 00:35:43.764449 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:43.764727 kubelet[1912]: I0710 00:35:43.764456 1912 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a28ba47c-8745-4c5d-8d75-eac0985e0c7c-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:35:44.424674 systemd[1]: Removed slice kubepods-burstable-poda28ba47c_8745_4c5d_8d75_eac0985e0c7c.slice.
Jul 10 00:35:44.677169 systemd[1]: Created slice kubepods-burstable-pod339116da_2d0f_421d_9141_091d70d80f4e.slice.
Jul 10 00:35:44.769124 kubelet[1912]: I0710 00:35:44.769074 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-host-proc-sys-net\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769124 kubelet[1912]: I0710 00:35:44.769121 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-cilium-cgroup\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769499 kubelet[1912]: I0710 00:35:44.769143 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339116da-2d0f-421d-9141-091d70d80f4e-cilium-config-path\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769499 kubelet[1912]: I0710 00:35:44.769182 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/339116da-2d0f-421d-9141-091d70d80f4e-cilium-ipsec-secrets\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769499 kubelet[1912]: I0710 00:35:44.769266 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-cilium-run\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769499 kubelet[1912]: I0710 00:35:44.769286 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-cni-path\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769499 kubelet[1912]: I0710 00:35:44.769300 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-xtables-lock\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769499 kubelet[1912]: I0710 00:35:44.769315 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-host-proc-sys-kernel\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769655 kubelet[1912]: I0710 00:35:44.769332 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-etc-cni-netd\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769655 kubelet[1912]: I0710 00:35:44.769347 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-lib-modules\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769655 kubelet[1912]: I0710 00:35:44.769363 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/339116da-2d0f-421d-9141-091d70d80f4e-hubble-tls\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769655 kubelet[1912]: I0710 00:35:44.769381 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-bpf-maps\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769655 kubelet[1912]: I0710 00:35:44.769407 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/339116da-2d0f-421d-9141-091d70d80f4e-hostproc\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769655 kubelet[1912]: I0710 00:35:44.769423 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/339116da-2d0f-421d-9141-091d70d80f4e-clustermesh-secrets\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.769778 kubelet[1912]: I0710 00:35:44.769441 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jxb\" (UniqueName: \"kubernetes.io/projected/339116da-2d0f-421d-9141-091d70d80f4e-kube-api-access-b2jxb\") pod \"cilium-tl52k\" (UID: \"339116da-2d0f-421d-9141-091d70d80f4e\") " pod="kube-system/cilium-tl52k"
Jul 10 00:35:44.979862 kubelet[1912]: E0710 00:35:44.979743 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:44.980994 env[1214]: time="2025-07-10T00:35:44.980478734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tl52k,Uid:339116da-2d0f-421d-9141-091d70d80f4e,Namespace:kube-system,Attempt:0,}"
Jul 10 00:35:44.991926 env[1214]: time="2025-07-10T00:35:44.991753945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:35:44.991926 env[1214]: time="2025-07-10T00:35:44.991792105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:35:44.991926 env[1214]: time="2025-07-10T00:35:44.991802425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:35:44.992091 env[1214]: time="2025-07-10T00:35:44.991957345Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581 pid=3775 runtime=io.containerd.runc.v2
Jul 10 00:35:45.001734 systemd[1]: Started cri-containerd-d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581.scope.
Jul 10 00:35:45.035219 env[1214]: time="2025-07-10T00:35:45.035165008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tl52k,Uid:339116da-2d0f-421d-9141-091d70d80f4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\""
Jul 10 00:35:45.036140 kubelet[1912]: E0710 00:35:45.036114 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:45.039799 env[1214]: time="2025-07-10T00:35:45.039745176Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:35:45.048988 env[1214]: time="2025-07-10T00:35:45.048938950Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f\""
Jul 10 00:35:45.049457 env[1214]: time="2025-07-10T00:35:45.049430191Z" level=info msg="StartContainer for \"a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f\""
Jul 10 00:35:45.062744 systemd[1]: Started cri-containerd-a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f.scope.
Jul 10 00:35:45.105038 env[1214]: time="2025-07-10T00:35:45.104992160Z" level=info msg="StartContainer for \"a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f\" returns successfully"
Jul 10 00:35:45.122655 systemd[1]: cri-containerd-a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f.scope: Deactivated successfully.
Jul 10 00:35:45.150000 env[1214]: time="2025-07-10T00:35:45.149954712Z" level=info msg="shim disconnected" id=a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f
Jul 10 00:35:45.150281 env[1214]: time="2025-07-10T00:35:45.150239832Z" level=warning msg="cleaning up after shim disconnected" id=a70519771a8387d54f1f91f46e5710a2a60cf6c94db3008d2b59872b9041874f namespace=k8s.io
Jul 10 00:35:45.150353 env[1214]: time="2025-07-10T00:35:45.150338233Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:45.156881 env[1214]: time="2025-07-10T00:35:45.156843523Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3859 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:45.623510 kubelet[1912]: E0710 00:35:45.623479 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:45.625234 env[1214]: time="2025-07-10T00:35:45.625183272Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:35:45.641780 env[1214]: time="2025-07-10T00:35:45.641716419Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688\""
Jul 10 00:35:45.642465 env[1214]: time="2025-07-10T00:35:45.642427260Z" level=info msg="StartContainer for \"634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688\""
Jul 10 00:35:45.663689 systemd[1]: Started cri-containerd-634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688.scope.
Jul 10 00:35:45.699416 env[1214]: time="2025-07-10T00:35:45.699345871Z" level=info msg="StartContainer for \"634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688\" returns successfully"
Jul 10 00:35:45.700555 systemd[1]: cri-containerd-634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688.scope: Deactivated successfully.
Jul 10 00:35:45.722020 env[1214]: time="2025-07-10T00:35:45.721970907Z" level=info msg="shim disconnected" id=634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688
Jul 10 00:35:45.722020 env[1214]: time="2025-07-10T00:35:45.722017227Z" level=warning msg="cleaning up after shim disconnected" id=634423d50a478f44407df5566390e6a5eadf1c8f9a8f39b4bb84224b242dc688 namespace=k8s.io
Jul 10 00:35:45.722020 env[1214]: time="2025-07-10T00:35:45.722027547Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:45.729108 env[1214]: time="2025-07-10T00:35:45.729071599Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:46.419531 kubelet[1912]: I0710 00:35:46.419484 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28ba47c-8745-4c5d-8d75-eac0985e0c7c" path="/var/lib/kubelet/pods/a28ba47c-8745-4c5d-8d75-eac0985e0c7c/volumes"
Jul 10 00:35:46.626129 kubelet[1912]: E0710 00:35:46.626083 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:46.628196 env[1214]: time="2025-07-10T00:35:46.628152933Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:35:46.639604 env[1214]: time="2025-07-10T00:35:46.639558818Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a\""
Jul 10 00:35:46.641286 env[1214]: time="2025-07-10T00:35:46.640385560Z" level=info msg="StartContainer for \"2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a\""
Jul 10 00:35:46.660153 systemd[1]: Started cri-containerd-2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a.scope.
Jul 10 00:35:46.699529 env[1214]: time="2025-07-10T00:35:46.699427330Z" level=info msg="StartContainer for \"2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a\" returns successfully"
Jul 10 00:35:46.701324 systemd[1]: cri-containerd-2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a.scope: Deactivated successfully.
Jul 10 00:35:46.723105 env[1214]: time="2025-07-10T00:35:46.723055662Z" level=info msg="shim disconnected" id=2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a
Jul 10 00:35:46.723105 env[1214]: time="2025-07-10T00:35:46.723101062Z" level=warning msg="cleaning up after shim disconnected" id=2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a namespace=k8s.io
Jul 10 00:35:46.723315 env[1214]: time="2025-07-10T00:35:46.723112542Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:46.728739 env[1214]: time="2025-07-10T00:35:46.728703234Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3978 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:46.875205 systemd[1]: run-containerd-runc-k8s.io-2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a-runc.qcDpFj.mount: Deactivated successfully.
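
mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs are short-lived Cilium init containers, so each StartContainer above is followed almost immediately by the scope deactivating and the "shim disconnected" / "cleaning up dead shim" cleanup. The same exit can be observed from the containerd Go client; a small sketch (assumes the github.com/containerd/containerd module and access to the containerd socket):

    package shimwatch

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    // waitForExit blocks until the task backing a container exits, which is
    // the moment its shim disconnects in the log above.
    func waitForExit(id string) {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io") // CRI pods live here
        ctr, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := ctr.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        st := <-statusC
        log.Printf("task %s exited with code %d", id, st.ExitCode())
    }
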
Jul 10 00:35:46.875332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a96889e151b9d6ac806a42371f6eb8af22a61f15313174e04fc4575a4ecff8a-rootfs.mount: Deactivated successfully.
Jul 10 00:35:47.417658 kubelet[1912]: E0710 00:35:47.417614 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:47.629825 kubelet[1912]: E0710 00:35:47.629775 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:47.632171 env[1214]: time="2025-07-10T00:35:47.632128028Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:35:47.649982 env[1214]: time="2025-07-10T00:35:47.649932918Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4\""
Jul 10 00:35:47.651528 env[1214]: time="2025-07-10T00:35:47.650767200Z" level=info msg="StartContainer for \"f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4\""
Jul 10 00:35:47.667431 systemd[1]: Started cri-containerd-f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4.scope.
Jul 10 00:35:47.692621 systemd[1]: cri-containerd-f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4.scope: Deactivated successfully.
Jul 10 00:35:47.693793 env[1214]: time="2025-07-10T00:35:47.693683560Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod339116da_2d0f_421d_9141_091d70d80f4e.slice/cri-containerd-f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4.scope/memory.events\": no such file or directory"
Jul 10 00:35:47.696574 env[1214]: time="2025-07-10T00:35:47.696535808Z" level=info msg="StartContainer for \"f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4\" returns successfully"
Jul 10 00:35:47.714929 env[1214]: time="2025-07-10T00:35:47.714886979Z" level=info msg="shim disconnected" id=f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4
Jul 10 00:35:47.715154 env[1214]: time="2025-07-10T00:35:47.715135619Z" level=warning msg="cleaning up after shim disconnected" id=f4d986414bcee7b2db691dc6a504741f219ce5f89eca325c61c4d0952d8fbaf4 namespace=k8s.io
Jul 10 00:35:47.715229 env[1214]: time="2025-07-10T00:35:47.715206259Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:47.721228 env[1214]: time="2025-07-10T00:35:47.721189796Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:48.456148 kubelet[1912]: E0710 00:35:48.456107 1912 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:35:48.635209 kubelet[1912]: E0710 00:35:48.635164 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:48.639940 env[1214]: time="2025-07-10T00:35:48.639862111Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:35:48.658670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102934974.mount: Deactivated successfully.
Jul 10 00:35:48.662613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230385706.mount: Deactivated successfully.
Jul 10 00:35:48.671945 env[1214]: time="2025-07-10T00:35:48.671888498Z" level=info msg="CreateContainer within sandbox \"d40e6febc72fd04c21a0bfefa3c63a94904c794ba44bbf7e69e2806f02070581\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29\""
Jul 10 00:35:48.672883 env[1214]: time="2025-07-10T00:35:48.672851501Z" level=info msg="StartContainer for \"37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29\""
Jul 10 00:35:48.687663 systemd[1]: Started cri-containerd-37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29.scope.
Jul 10 00:35:48.732889 env[1214]: time="2025-07-10T00:35:48.732726982Z" level=info msg="StartContainer for \"37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29\" returns successfully"
Jul 10 00:35:48.980254 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 10 00:35:49.639520 kubelet[1912]: E0710 00:35:49.639487 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:49.661806 kubelet[1912]: I0710 00:35:49.661750 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tl52k" podStartSLOduration=5.66173249 podStartE2EDuration="5.66173249s" podCreationTimestamp="2025-07-10 00:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:49.66167093 +0000 UTC m=+91.346745958" watchObservedRunningTime="2025-07-10 00:35:49.66173249 +0000 UTC m=+91.346807518"
Jul 10 00:35:50.518858 kubelet[1912]: I0710 00:35:50.518814 1912 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:35:50Z","lastTransitionTime":"2025-07-10T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 10 00:35:50.980744 kubelet[1912]: E0710 00:35:50.980709 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:51.733275 systemd-networkd[1041]: lxc_health: Link UP
Jul 10 00:35:51.740953 systemd-networkd[1041]: lxc_health: Gained carrier
Jul 10 00:35:51.741297 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 10 00:35:52.923340 systemd-networkd[1041]: lxc_health: Gained IPv6LL
Jul 10 00:35:52.982371 kubelet[1912]: E0710 00:35:52.982333 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:53.416966 kubelet[1912]: E0710 00:35:53.416928 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:53.645942 kubelet[1912]: E0710 00:35:53.645896 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:53.990041 systemd[1]: run-containerd-runc-k8s.io-37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29-runc.lma78n.mount: Deactivated successfully.
Jul 10 00:35:54.647419 kubelet[1912]: E0710 00:35:54.647370 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:56.108506 systemd[1]: run-containerd-runc-k8s.io-37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29-runc.jnOwv4.mount: Deactivated successfully.
Jul 10 00:35:58.221368 systemd[1]: run-containerd-runc-k8s.io-37c3321ece9299684042d8e0215e1f034c376cee0734f0f1739e1ba26db86f29-runc.1i7tno.mount: Deactivated successfully.
Jul 10 00:35:58.275485 sshd[3744]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:58.278438 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:57398.service: Deactivated successfully.
Jul 10 00:35:58.279118 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 00:35:58.279647 systemd-logind[1201]: Session 26 logged out. Waiting for processes to exit.
Jul 10 00:35:58.280400 systemd-logind[1201]: Removed session 26.
Jul 10 00:35:58.417992 kubelet[1912]: E0710 00:35:58.417949 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"