Jul 15 11:25:24.845945 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025
Jul 15 11:25:24.845965 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:25:24.845973 kernel: BIOS-provided physical RAM map:
Jul 15 11:25:24.845978 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 11:25:24.845984 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 11:25:24.845989 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 11:25:24.845995 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 11:25:24.846001 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 11:25:24.846008 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 11:25:24.846014 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 11:25:24.846019 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 11:25:24.846025 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 11:25:24.846030 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 11:25:24.846036 kernel: NX (Execute Disable) protection: active
Jul 15 11:25:24.846053 kernel: SMBIOS 2.8 present.
Jul 15 11:25:24.846060 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 11:25:24.846065 kernel: Hypervisor detected: KVM
Jul 15 11:25:24.846071 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 11:25:24.846077 kernel: kvm-clock: cpu 0, msr 6c19b001, primary cpu clock
Jul 15 11:25:24.846084 kernel: kvm-clock: using sched offset of 2409523586 cycles
Jul 15 11:25:24.846090 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 11:25:24.846096 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 11:25:24.846103 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 11:25:24.846110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 11:25:24.846116 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 11:25:24.846122 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 11:25:24.846129 kernel: Using GB pages for direct mapping
Jul 15 11:25:24.846135 kernel: ACPI: Early table checksum verification disabled
Jul 15 11:25:24.846141 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 11:25:24.846147 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846153 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846159 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846167 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 11:25:24.846173 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846179 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846185 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846191 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:25:24.846197 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 11:25:24.846204 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 11:25:24.846211 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 11:25:24.846232 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 11:25:24.846240 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 11:25:24.846248 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 11:25:24.846256 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 11:25:24.846264 kernel: No NUMA configuration found
Jul 15 11:25:24.846272 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 11:25:24.846282 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 15 11:25:24.846291 kernel: Zone ranges:
Jul 15 11:25:24.846299 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 11:25:24.846307 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 11:25:24.846316 kernel: Normal empty
Jul 15 11:25:24.846323 kernel: Movable zone start for each node
Jul 15 11:25:24.846329 kernel: Early memory node ranges
Jul 15 11:25:24.846336 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 11:25:24.846342 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 11:25:24.846349 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 11:25:24.846357 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 11:25:24.846363 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 11:25:24.846370 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 11:25:24.846376 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 11:25:24.846383 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 11:25:24.846389 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 11:25:24.846396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 11:25:24.846402 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 11:25:24.846409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 11:25:24.846416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 11:25:24.846423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 11:25:24.846429 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 11:25:24.846436 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 11:25:24.846442 kernel: TSC deadline timer available
Jul 15 11:25:24.846448 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 15 11:25:24.846455 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 11:25:24.846461 kernel: kvm-guest: setup PV sched yield
Jul 15 11:25:24.846467 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 11:25:24.846475 kernel: Booting paravirtualized kernel on KVM
Jul 15 11:25:24.846482 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 11:25:24.846488 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 15 11:25:24.846495 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 15 11:25:24.846501 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 15 11:25:24.846508 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 11:25:24.846514 kernel: kvm-guest: setup async PF for cpu 0
Jul 15 11:25:24.846520 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 15 11:25:24.846527 kernel: kvm-guest: PV spinlocks enabled
Jul 15 11:25:24.846535 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 11:25:24.846541 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 15 11:25:24.846548 kernel: Policy zone: DMA32
Jul 15 11:25:24.846555 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:25:24.846562 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 11:25:24.846568 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 11:25:24.846575 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 11:25:24.846582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 11:25:24.846590 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 134796K reserved, 0K cma-reserved)
Jul 15 11:25:24.846596 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 11:25:24.846603 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 15 11:25:24.846609 kernel: ftrace: allocated 136 pages with 2 groups
Jul 15 11:25:24.846616 kernel: rcu: Hierarchical RCU implementation.
Jul 15 11:25:24.846623 kernel: rcu: RCU event tracing is enabled.
Jul 15 11:25:24.846630 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 11:25:24.846636 kernel: Rude variant of Tasks RCU enabled.
Jul 15 11:25:24.846643 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 11:25:24.846650 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 11:25:24.846657 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 11:25:24.846664 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 11:25:24.846670 kernel: random: crng init done
Jul 15 11:25:24.846676 kernel: Console: colour VGA+ 80x25
Jul 15 11:25:24.846704 kernel: printk: console [ttyS0] enabled
Jul 15 11:25:24.846711 kernel: ACPI: Core revision 20210730
Jul 15 11:25:24.846718 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 11:25:24.846724 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 11:25:24.846732 kernel: x2apic enabled
Jul 15 11:25:24.846739 kernel: Switched APIC routing to physical x2apic.
Jul 15 11:25:24.846745 kernel: kvm-guest: setup PV IPIs
Jul 15 11:25:24.846752 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 11:25:24.846758 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 15 11:25:24.846765 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 11:25:24.846772 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 11:25:24.846778 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 11:25:24.846785 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 11:25:24.846797 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 11:25:24.846804 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 11:25:24.846811 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 11:25:24.846819 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 11:25:24.846826 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 11:25:24.846833 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 11:25:24.846840 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 15 11:25:24.846847 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 11:25:24.846854 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 11:25:24.846862 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 11:25:24.846868 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 11:25:24.846875 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 15 11:25:24.846882 kernel: Freeing SMP alternatives memory: 32K
Jul 15 11:25:24.846889 kernel: pid_max: default: 32768 minimum: 301
Jul 15 11:25:24.846895 kernel: LSM: Security Framework initializing
Jul 15 11:25:24.846902 kernel: SELinux: Initializing.
Jul 15 11:25:24.846909 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:25:24.846917 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:25:24.846924 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 11:25:24.846931 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 11:25:24.846938 kernel: ... version: 0
Jul 15 11:25:24.846944 kernel: ... bit width: 48
Jul 15 11:25:24.846951 kernel: ... generic registers: 6
Jul 15 11:25:24.846958 kernel: ... value mask: 0000ffffffffffff
Jul 15 11:25:24.846964 kernel: ... max period: 00007fffffffffff
Jul 15 11:25:24.846971 kernel: ... fixed-purpose events: 0
Jul 15 11:25:24.847469 kernel: ... event mask: 000000000000003f
Jul 15 11:25:24.847482 kernel: signal: max sigframe size: 1776
Jul 15 11:25:24.847489 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 11:25:24.847496 kernel: smp: Bringing up secondary CPUs ...
Jul 15 11:25:24.847503 kernel: x86: Booting SMP configuration:
Jul 15 11:25:24.847509 kernel: .... node #0, CPUs: #1
Jul 15 11:25:24.847516 kernel: kvm-clock: cpu 1, msr 6c19b041, secondary cpu clock
Jul 15 11:25:24.847523 kernel: kvm-guest: setup async PF for cpu 1
Jul 15 11:25:24.847530 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 15 11:25:24.847540 kernel: #2
Jul 15 11:25:24.847547 kernel: kvm-clock: cpu 2, msr 6c19b081, secondary cpu clock
Jul 15 11:25:24.847554 kernel: kvm-guest: setup async PF for cpu 2
Jul 15 11:25:24.847560 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 15 11:25:24.847567 kernel: #3
Jul 15 11:25:24.847574 kernel: kvm-clock: cpu 3, msr 6c19b0c1, secondary cpu clock
Jul 15 11:25:24.847580 kernel: kvm-guest: setup async PF for cpu 3
Jul 15 11:25:24.847587 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 15 11:25:24.847594 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 11:25:24.847602 kernel: smpboot: Max logical packages: 1
Jul 15 11:25:24.847609 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 11:25:24.847615 kernel: devtmpfs: initialized
Jul 15 11:25:24.847622 kernel: x86/mm: Memory block size: 128MB
Jul 15 11:25:24.847629 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 11:25:24.847636 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 11:25:24.847643 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 11:25:24.847649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 11:25:24.847656 kernel: audit: initializing netlink subsys (disabled)
Jul 15 11:25:24.847663 kernel: audit: type=2000 audit(1752578725.290:1): state=initialized audit_enabled=0 res=1
Jul 15 11:25:24.847671 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 11:25:24.847678 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 11:25:24.847697 kernel: cpuidle: using governor menu
Jul 15 11:25:24.847714 kernel: ACPI: bus type PCI registered
Jul 15 11:25:24.847721 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 11:25:24.847727 kernel: dca service started, version 1.12.1
Jul 15 11:25:24.847734 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 15 11:25:24.847741 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 15 11:25:24.847748 kernel: PCI: Using configuration type 1 for base access
Jul 15 11:25:24.847756 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 11:25:24.847763 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 11:25:24.847770 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 11:25:24.847777 kernel: ACPI: Added _OSI(Module Device)
Jul 15 11:25:24.847783 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 11:25:24.847790 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 11:25:24.847797 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 15 11:25:24.847804 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 15 11:25:24.847810 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 15 11:25:24.847819 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 11:25:24.847825 kernel: ACPI: Interpreter enabled
Jul 15 11:25:24.847832 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 11:25:24.847839 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 11:25:24.847845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 11:25:24.847852 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 11:25:24.847859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 11:25:24.847975 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 11:25:24.848060 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 11:25:24.848134 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 11:25:24.848143 kernel: PCI host bridge to bus 0000:00
Jul 15 11:25:24.848216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 11:25:24.848277 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 11:25:24.848336 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 11:25:24.848394 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 11:25:24.848456 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 11:25:24.848515 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 11:25:24.848575 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 11:25:24.848651 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 15 11:25:24.848775 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 15 11:25:24.848844 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 15 11:25:24.848914 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 15 11:25:24.848979 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 15 11:25:24.849053 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 11:25:24.849128 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 11:25:24.849197 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 15 11:25:24.849269 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 15 11:25:24.849339 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 11:25:24.849416 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 15 11:25:24.849486 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 15 11:25:24.849552 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 15 11:25:24.849621 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 11:25:24.849708 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 15 11:25:24.849778 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 15 11:25:24.849846 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 15 11:25:24.849917 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 11:25:24.849984 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 15 11:25:24.850066 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 15 11:25:24.850135 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 11:25:24.850214 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 15 11:25:24.850281 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 15 11:25:24.850352 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 15 11:25:24.850426 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 15 11:25:24.850494 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 15 11:25:24.850504 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 11:25:24.850511 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 11:25:24.850518 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 11:25:24.850525 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 11:25:24.850531 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 11:25:24.850541 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 11:25:24.850548 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 11:25:24.850555 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 11:25:24.850562 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 11:25:24.850569 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 11:25:24.850575 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 11:25:24.850582 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 11:25:24.850589 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 11:25:24.850596 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 11:25:24.850604 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 11:25:24.850611 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 11:25:24.850618 kernel: iommu: Default domain type: Translated
Jul 15 11:25:24.850625 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 11:25:24.850705 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 11:25:24.850775 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 11:25:24.850841 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 11:25:24.850850 kernel: vgaarb: loaded
Jul 15 11:25:24.850857 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 11:25:24.850866 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 11:25:24.850874 kernel: PTP clock support registered
Jul 15 11:25:24.850880 kernel: PCI: Using ACPI for IRQ routing
Jul 15 11:25:24.850887 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 11:25:24.850894 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 11:25:24.850901 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 11:25:24.850908 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 11:25:24.850914 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 11:25:24.850921 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 11:25:24.850930 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 11:25:24.850937 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 11:25:24.850949 kernel: pnp: PnP ACPI init
Jul 15 11:25:24.851026 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 11:25:24.851045 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 11:25:24.851053 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 11:25:24.851059 kernel: NET: Registered PF_INET protocol family
Jul 15 11:25:24.851066 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 11:25:24.851076 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 11:25:24.851084 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 11:25:24.851092 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 11:25:24.851100 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 15 11:25:24.851108 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 11:25:24.851116 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:25:24.851122 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:25:24.851129 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 11:25:24.851136 kernel: NET: Registered PF_XDP protocol family
Jul 15 11:25:24.851201 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 11:25:24.855292 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 11:25:24.855367 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 11:25:24.855427 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 11:25:24.855487 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 11:25:24.855545 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 11:25:24.855554 kernel: PCI: CLS 0 bytes, default 64
Jul 15 11:25:24.855561 kernel: Initialise system trusted keyrings
Jul 15 11:25:24.855571 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 11:25:24.855578 kernel: Key type asymmetric registered
Jul 15 11:25:24.855585 kernel: Asymmetric key parser 'x509' registered
Jul 15 11:25:24.855592 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 11:25:24.855599 kernel: io scheduler mq-deadline registered
Jul 15 11:25:24.855605 kernel: io scheduler kyber registered
Jul 15 11:25:24.855613 kernel: io scheduler bfq registered
Jul 15 11:25:24.855620 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 11:25:24.855627 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 11:25:24.855635 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 11:25:24.855642 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 11:25:24.855649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 11:25:24.855656 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 11:25:24.855664 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 11:25:24.855671 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 11:25:24.855677 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 11:25:24.855810 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 11:25:24.855822 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 11:25:24.855901 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 11:25:24.856016 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T11:25:24 UTC (1752578724)
Jul 15 11:25:24.856096 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 11:25:24.856105 kernel: NET: Registered PF_INET6 protocol family
Jul 15 11:25:24.856112 kernel: Segment Routing with IPv6
Jul 15 11:25:24.856119 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 11:25:24.856126 kernel: NET: Registered PF_PACKET protocol family
Jul 15 11:25:24.856133 kernel: Key type dns_resolver registered
Jul 15 11:25:24.856143 kernel: IPI shorthand broadcast: enabled
Jul 15 11:25:24.856151 kernel: sched_clock: Marking stable (397129868, 98959319)->(539705227, -43616040)
Jul 15 11:25:24.856158 kernel: registered taskstats version 1
Jul 15 11:25:24.856165 kernel: Loading compiled-in X.509 certificates
Jul 15 11:25:24.856172 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289'
Jul 15 11:25:24.856179 kernel: Key type .fscrypt registered
Jul 15 11:25:24.856185 kernel: Key type fscrypt-provisioning registered
Jul 15 11:25:24.856192 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 11:25:24.856199 kernel: ima: Allocated hash algorithm: sha1
Jul 15 11:25:24.856208 kernel: ima: No architecture policies found
Jul 15 11:25:24.856215 kernel: clk: Disabling unused clocks
Jul 15 11:25:24.856222 kernel: Freeing unused kernel image (initmem) memory: 47476K
Jul 15 11:25:24.856229 kernel: Write protecting the kernel read-only data: 28672k
Jul 15 11:25:24.856236 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 15 11:25:24.856243 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Jul 15 11:25:24.856250 kernel: Run /init as init process
Jul 15 11:25:24.856257 kernel: with arguments:
Jul 15 11:25:24.856264 kernel: /init
Jul 15 11:25:24.856272 kernel: with environment:
Jul 15 11:25:24.856279 kernel: HOME=/
Jul 15 11:25:24.856286 kernel: TERM=linux
Jul 15 11:25:24.856293 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 11:25:24.856302 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:25:24.856312 systemd[1]: Detected virtualization kvm.
Jul 15 11:25:24.856319 systemd[1]: Detected architecture x86-64.
Jul 15 11:25:24.856327 systemd[1]: Running in initrd.
Jul 15 11:25:24.856335 systemd[1]: No hostname configured, using default hostname.
Jul 15 11:25:24.856343 systemd[1]: Hostname set to .
Jul 15 11:25:24.856351 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:25:24.856358 systemd[1]: Queued start job for default target initrd.target.
Jul 15 11:25:24.856366 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:25:24.856373 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:25:24.856380 systemd[1]: Reached target paths.target.
Jul 15 11:25:24.856387 systemd[1]: Reached target slices.target.
Jul 15 11:25:24.856396 systemd[1]: Reached target swap.target.
Jul 15 11:25:24.856410 systemd[1]: Reached target timers.target.
Jul 15 11:25:24.856419 systemd[1]: Listening on iscsid.socket.
Jul 15 11:25:24.856427 systemd[1]: Listening on iscsiuio.socket.
Jul 15 11:25:24.856434 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:25:24.856443 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:25:24.856450 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:25:24.856458 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:25:24.856466 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:25:24.856473 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:25:24.856481 systemd[1]: Reached target sockets.target.
Jul 15 11:25:24.856489 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:25:24.856497 systemd[1]: Finished network-cleanup.service.
Jul 15 11:25:24.856504 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 11:25:24.856513 systemd[1]: Starting systemd-journald.service...
Jul 15 11:25:24.856521 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:25:24.856529 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:25:24.856537 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 15 11:25:24.856545 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:25:24.856552 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 11:25:24.856560 kernel: audit: type=1130 audit(1752578724.845:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.856568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:25:24.856575 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:25:24.856587 systemd-journald[198]: Journal started
Jul 15 11:25:24.856625 systemd-journald[198]: Runtime Journal (/run/log/journal/2c9250b9cd5b423d8eb61ef450d3df19) is 6.0M, max 48.5M, 42.5M free.
Jul 15 11:25:24.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.856033 systemd-modules-load[199]: Inserted module 'overlay'
Jul 15 11:25:24.887253 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 11:25:24.887280 systemd[1]: Started systemd-journald.service.
Jul 15 11:25:24.871732 systemd-resolved[200]: Positive Trust Anchors:
Jul 15 11:25:24.894301 kernel: audit: type=1130 audit(1752578724.885:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.894316 kernel: audit: type=1130 audit(1752578724.887:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.871740 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:25:24.898586 kernel: Bridge firewalling registered
Jul 15 11:25:24.898599 kernel: audit: type=1130 audit(1752578724.895:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.871768 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:25:24.907870 kernel: audit: type=1130 audit(1752578724.898:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:24.873865 systemd-resolved[200]: Defaulting to hostname 'linux'.
Jul 15 11:25:24.888242 systemd[1]: Started systemd-resolved.service.
Jul 15 11:25:24.898150 systemd[1]: Finished systemd-vconsole-setup.service. Jul 15 11:25:24.898405 systemd-modules-load[199]: Inserted module 'br_netfilter' Jul 15 11:25:24.901933 systemd[1]: Reached target nss-lookup.target. Jul 15 11:25:24.908927 systemd[1]: Starting dracut-cmdline-ask.service... Jul 15 11:25:24.919908 systemd[1]: Finished dracut-cmdline-ask.service. Jul 15 11:25:24.923788 kernel: audit: type=1130 audit(1752578724.920:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:24.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:24.921507 systemd[1]: Starting dracut-cmdline.service... Jul 15 11:25:24.927716 kernel: SCSI subsystem initialized Jul 15 11:25:24.932949 dracut-cmdline[215]: dracut-dracut-053 Jul 15 11:25:24.935200 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:25:24.944235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 11:25:24.944297 kernel: device-mapper: uevent: version 1.0.3 Jul 15 11:25:24.944313 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 15 11:25:24.946853 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 15 11:25:24.947717 systemd[1]: Finished systemd-modules-load.service. 
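[Editor's note] The dracut-cmdline lines above echo the kernel command line as a flat string of parameters. A hedged sketch of how such a string breaks into key/value pairs (the helper name `parse_cmdline` is ours, not dracut's; last occurrence of a repeated key like `rootflags` wins, which matches how most consumers resolve duplicates):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key/value pairs.

    Tokens split on whitespace; the first '=' separates key from value;
    bare tokens (e.g. 'quiet') become boolean flags.
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # last occurrence wins
    return params

# Parameters copied from the log above (subset).
params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw mount.usrflags=ro "
    "consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
    "flatcar.first_boot=detected"
)
```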
Jul 15 11:25:24.952266 kernel: audit: type=1130 audit(1752578724.947:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:24.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:24.952965 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:25:24.960599 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:25:24.964839 kernel: audit: type=1130 audit(1752578724.960:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:24.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:25.004722 kernel: Loading iSCSI transport class v2.0-870. Jul 15 11:25:25.020721 kernel: iscsi: registered transport (tcp) Jul 15 11:25:25.042727 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:25:25.042751 kernel: QLogic iSCSI HBA Driver Jul 15 11:25:25.069606 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:25:25.073785 kernel: audit: type=1130 audit(1752578725.069:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:25.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:25.073836 systemd[1]: Starting dracut-pre-udev.service... 
Jul 15 11:25:25.120713 kernel: raid6: avx2x4 gen() 30812 MB/s Jul 15 11:25:25.137706 kernel: raid6: avx2x4 xor() 8276 MB/s Jul 15 11:25:25.154705 kernel: raid6: avx2x2 gen() 32469 MB/s Jul 15 11:25:25.171706 kernel: raid6: avx2x2 xor() 19185 MB/s Jul 15 11:25:25.188705 kernel: raid6: avx2x1 gen() 26384 MB/s Jul 15 11:25:25.205705 kernel: raid6: avx2x1 xor() 15290 MB/s Jul 15 11:25:25.222706 kernel: raid6: sse2x4 gen() 14739 MB/s Jul 15 11:25:25.239708 kernel: raid6: sse2x4 xor() 7555 MB/s Jul 15 11:25:25.256707 kernel: raid6: sse2x2 gen() 16289 MB/s Jul 15 11:25:25.273707 kernel: raid6: sse2x2 xor() 9779 MB/s Jul 15 11:25:25.290707 kernel: raid6: sse2x1 gen() 12483 MB/s Jul 15 11:25:25.307969 kernel: raid6: sse2x1 xor() 7760 MB/s Jul 15 11:25:25.307989 kernel: raid6: using algorithm avx2x2 gen() 32469 MB/s Jul 15 11:25:25.308001 kernel: raid6: .... xor() 19185 MB/s, rmw enabled Jul 15 11:25:25.309255 kernel: raid6: using avx2x2 recovery algorithm Jul 15 11:25:25.320713 kernel: xor: automatically using best checksumming function avx Jul 15 11:25:25.408717 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 15 11:25:25.416259 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:25:25.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:25.417000 audit: BPF prog-id=7 op=LOAD Jul 15 11:25:25.417000 audit: BPF prog-id=8 op=LOAD Jul 15 11:25:25.418203 systemd[1]: Starting systemd-udevd.service... Jul 15 11:25:25.430008 systemd-udevd[399]: Using default interface naming scheme 'v252'. Jul 15 11:25:25.433951 systemd[1]: Started systemd-udevd.service. Jul 15 11:25:25.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:25:25.434805 systemd[1]: Starting dracut-pre-trigger.service... Jul 15 11:25:25.445059 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jul 15 11:25:25.467801 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:25:25.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:25.468858 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:25:25.502785 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:25:25.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:25.533749 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 11:25:25.555098 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 11:25:25.555113 kernel: AVX2 version of gcm_enc/dec engaged. Jul 15 11:25:25.555121 kernel: AES CTR mode by8 optimization enabled Jul 15 11:25:25.555130 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 11:25:25.555139 kernel: GPT:9289727 != 19775487 Jul 15 11:25:25.555147 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 11:25:25.555155 kernel: GPT:9289727 != 19775487 Jul 15 11:25:25.555163 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:25:25.555173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:25:25.566709 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (451) Jul 15 11:25:25.572703 kernel: libata version 3.00 loaded. Jul 15 11:25:25.578497 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
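[Editor's note] The raid6 lines above are the kernel benchmarking each generator implementation and keeping the fastest; a toy re-creation of that pick, with the MB/s figures copied from the log (this is an illustration of the selection, not the kernel's actual C code):

```python
# gen() throughput in MB/s as benchmarked in the log above.
gen_mb_s = {
    "avx2x4": 30812,
    "avx2x2": 32469,
    "avx2x1": 26384,
    "sse2x4": 14739,
    "sse2x2": 16289,
    "sse2x1": 12483,
}

# The kernel keeps the fastest generator.
best = max(gen_mb_s, key=gen_mb_s.get)  # matches "raid6: using algorithm avx2x2"
```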
Jul 15 11:25:25.620826 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 11:25:25.620943 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 11:25:25.620954 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 15 11:25:25.621044 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 11:25:25.621118 kernel: scsi host0: ahci Jul 15 11:25:25.621222 kernel: scsi host1: ahci Jul 15 11:25:25.621301 kernel: scsi host2: ahci Jul 15 11:25:25.621379 kernel: scsi host3: ahci Jul 15 11:25:25.621456 kernel: scsi host4: ahci Jul 15 11:25:25.621538 kernel: scsi host5: ahci Jul 15 11:25:25.621635 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 15 11:25:25.621645 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 15 11:25:25.621653 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 15 11:25:25.621662 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 15 11:25:25.621670 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 15 11:25:25.621679 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 15 11:25:25.622242 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:25:25.627145 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:25:25.627535 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:25:25.632196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:25:25.633037 systemd[1]: Starting disk-uuid.service... Jul 15 11:25:25.641874 disk-uuid[536]: Primary Header is updated. Jul 15 11:25:25.641874 disk-uuid[536]: Secondary Entries is updated. Jul 15 11:25:25.641874 disk-uuid[536]: Secondary Header is updated. 
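[Editor's note] The earlier GPT warnings ("GPT:9289727 != 19775487") follow directly from the disk geometry in the log: the virtio disk reports 19775488 512-byte blocks, so the backup GPT header should sit on the last LBA. A small sanity check of that arithmetic (variable names are ours); tools such as `sgdisk -e` or GNU Parted, as the kernel hints, can relocate the backup structures to the new end of disk:

```python
total_blocks = 19775488              # "virtio1: [vda] 19775488 512-byte logical blocks"
expected_alt_lba = total_blocks - 1  # backup GPT header belongs on the last LBA
found_alt_lba = 9289727              # where the kernel actually found it

# The backup header sits well before the end of the disk: the image was
# enlarged after partitioning, which is common for cloud/VM images.
disk_was_grown = found_alt_lba < expected_alt_lba
```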
Jul 15 11:25:25.645738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:25:25.648720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:25:25.889712 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 11:25:25.889767 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 11:25:25.890702 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 11:25:25.891736 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 11:25:25.891807 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 11:25:25.893115 kernel: ata3.00: applying bridge limits Jul 15 11:25:25.893803 kernel: ata3.00: configured for UDMA/100 Jul 15 11:25:25.894710 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 11:25:25.899711 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 11:25:25.899732 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 11:25:25.926713 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 11:25:25.943158 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 11:25:25.943170 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 11:25:26.650058 disk-uuid[537]: The operation has completed successfully. Jul 15 11:25:26.651590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:25:26.673703 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:25:26.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.673784 systemd[1]: Finished disk-uuid.service. Jul 15 11:25:26.677863 systemd[1]: Starting verity-setup.service... 
Jul 15 11:25:26.690718 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 15 11:25:26.710079 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:25:26.712020 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:25:26.715283 systemd[1]: Finished verity-setup.service. Jul 15 11:25:26.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.772444 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:25:26.773807 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:25:26.773862 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:25:26.775952 systemd[1]: Starting ignition-setup.service... Jul 15 11:25:26.777759 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:25:26.784339 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:25:26.784370 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:25:26.784379 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:25:26.792398 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:25:26.800384 systemd[1]: Finished ignition-setup.service. Jul 15 11:25:26.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.801635 systemd[1]: Starting ignition-fetch-offline.service... 
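[Editor's note] The device-mapper line above ("verity: sha256 using implementation \"sha256-ni\"") refers to the Merkle-tree hashing that authenticates the read-only /usr partition against the `verity.usrhash=` value on the kernel command line. A toy one-level illustration of the idea only; real dm-verity salts each hash and builds a multi-level tree over 4096-byte blocks:

```python
import hashlib

BLOCK_SIZE = 4096

def toy_root(data: bytes) -> str:
    # Hash each block, then hash the concatenation of the block hashes.
    # A real verity tree repeats this level by level until one root
    # digest remains; that root is what the kernel verifies.
    leaves = [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
              for i in range(0, len(data), BLOCK_SIZE)]
    return hashlib.sha256(b"".join(leaves)).hexdigest()

root = toy_root(b"\x00" * (4 * BLOCK_SIZE))
```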
Jul 15 11:25:26.837388 ignition[636]: Ignition 2.14.0 Jul 15 11:25:26.837402 ignition[636]: Stage: fetch-offline Jul 15 11:25:26.837490 ignition[636]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:25:26.837501 ignition[636]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:25:26.837609 ignition[636]: parsed url from cmdline: "" Jul 15 11:25:26.837613 ignition[636]: no config URL provided Jul 15 11:25:26.837619 ignition[636]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:25:26.837627 ignition[636]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:25:26.837647 ignition[636]: op(1): [started] loading QEMU firmware config module Jul 15 11:25:26.837652 ignition[636]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:25:26.841138 ignition[636]: op(1): [finished] loading QEMU firmware config module Jul 15 11:25:26.841154 ignition[636]: QEMU firmware config was not found. Ignoring... Jul 15 11:25:26.856148 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:25:26.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.857000 audit: BPF prog-id=9 op=LOAD Jul 15 11:25:26.858309 systemd[1]: Starting systemd-networkd.service... Jul 15 11:25:26.885889 ignition[636]: parsing config with SHA512: 0d5187b049d5e3d8e1e947b99a13c0a1d6a361fc74e9452f4a59d7a0bb53b644c6d12e1bababc7a61a8de512321f07f4f9887a63672c768b072d6540663dc837 Jul 15 11:25:26.892582 unknown[636]: fetched base config from "system" Jul 15 11:25:26.892592 unknown[636]: fetched user config from "qemu" Jul 15 11:25:26.893220 ignition[636]: fetch-offline: fetch-offline passed Jul 15 11:25:26.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:25:26.894230 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:25:26.893284 ignition[636]: Ignition finished successfully Jul 15 11:25:26.914377 systemd-networkd[717]: lo: Link UP Jul 15 11:25:26.914388 systemd-networkd[717]: lo: Gained carrier Jul 15 11:25:26.916311 systemd-networkd[717]: Enumeration completed Jul 15 11:25:26.916396 systemd[1]: Started systemd-networkd.service. Jul 15 11:25:26.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.918050 systemd[1]: Reached target network.target. Jul 15 11:25:26.919578 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:25:26.920309 systemd[1]: Starting ignition-kargs.service... Jul 15 11:25:26.921520 systemd[1]: Starting iscsiuio.service... Jul 15 11:25:26.925509 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:25:26.925903 systemd[1]: Started iscsiuio.service. Jul 15 11:25:26.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.927045 systemd[1]: Starting iscsid.service... Jul 15 11:25:26.930952 iscsid[723]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:25:26.930952 iscsid[723]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 15 11:25:26.930952 iscsid[723]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:25:26.930952 iscsid[723]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:25:26.930952 iscsid[723]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:25:26.930952 iscsid[723]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:25:26.930952 iscsid[723]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:25:26.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.929666 systemd-networkd[717]: eth0: Link UP Jul 15 11:25:26.932089 ignition[719]: Ignition 2.14.0 Jul 15 11:25:26.929670 systemd-networkd[717]: eth0: Gained carrier Jul 15 11:25:26.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.932096 ignition[719]: Stage: kargs Jul 15 11:25:26.933456 systemd[1]: Started iscsid.service. Jul 15 11:25:26.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.932180 ignition[719]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:25:26.938055 systemd[1]: Finished ignition-kargs.service. 
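[Editor's note] The iscsid warning above spells out its own fix: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName= line. A sketch that validates the example IQN the message itself suggests (the regex is our loose approximation of the iqn name shape, not iscsid's actual parser or the full RFC 3720 grammar):

```python
import re

# Loose shape: "iqn." + year-month + "." + reversed domain + optional ":identifier"
IQN_RE = re.compile(r"^InitiatorName=iqn\.\d{4}-\d{2}\.[A-Za-z0-9.-]+(:.+)?$")

line = "InitiatorName=iqn.2001-04.com.redhat:fc6"  # example from the log above
```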
Jul 15 11:25:26.932189 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:25:26.942175 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:25:26.933281 ignition[719]: kargs: kargs passed Jul 15 11:25:26.943645 systemd[1]: Starting ignition-disks.service... Jul 15 11:25:26.933324 ignition[719]: Ignition finished successfully Jul 15 11:25:26.944838 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:25:27.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:26.955188 ignition[730]: Ignition 2.14.0 Jul 15 11:25:26.956662 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:25:26.955193 ignition[730]: Stage: disks Jul 15 11:25:26.995727 systemd[1]: Finished ignition-disks.service. Jul 15 11:25:26.955273 ignition[730]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:25:26.997117 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:25:26.955281 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:25:26.998571 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:25:26.956126 ignition[730]: disks: disks passed Jul 15 11:25:26.999422 systemd[1]: Reached target local-fs.target. Jul 15 11:25:26.956156 ignition[730]: Ignition finished successfully Jul 15 11:25:27.000215 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:25:27.001056 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:25:27.002011 systemd[1]: Reached target remote-fs.target. Jul 15 11:25:27.003818 systemd[1]: Reached target sysinit.target. Jul 15 11:25:27.004659 systemd[1]: Reached target basic.target. Jul 15 11:25:27.005769 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:25:27.011930 systemd[1]: Finished dracut-pre-mount.service. 
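[Editor's note] The systemd-networkd line above hands eth0 a DHCPv4 lease of 10.0.0.10/16 with gateway 10.0.0.1. The stdlib `ipaddress` module makes the relationship between lease, network, and gateway easy to check:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.10/16")  # leased address + prefix
gateway = ipaddress.ip_address("10.0.0.1")      # router offered by 10.0.0.1

# The gateway must be on-link, i.e. inside the leased network.
assert iface.network == ipaddress.ip_network("10.0.0.0/16")
assert gateway in iface.network
```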
Jul 15 11:25:27.014667 systemd[1]: Starting systemd-fsck-root.service... Jul 15 11:25:27.076219 systemd-fsck[750]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 15 11:25:27.513113 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:25:27.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:27.516343 systemd[1]: Mounting sysroot.mount... Jul 15 11:25:27.522718 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:25:27.522716 systemd[1]: Mounted sysroot.mount. Jul 15 11:25:27.523167 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:25:27.525226 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:25:27.526248 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:25:27.526281 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:25:27.526301 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:25:27.527694 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:25:27.529583 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:25:27.533770 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:25:27.536435 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:25:27.538957 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:25:27.541518 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:25:27.563793 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:25:27.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:25:27.565438 systemd[1]: Starting ignition-mount.service... Jul 15 11:25:27.566726 systemd[1]: Starting sysroot-boot.service... Jul 15 11:25:27.571389 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Jul 15 11:25:27.578638 ignition[803]: INFO : Ignition 2.14.0 Jul 15 11:25:27.578638 ignition[803]: INFO : Stage: mount Jul 15 11:25:27.580411 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:25:27.580411 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:25:27.580411 ignition[803]: INFO : mount: mount passed Jul 15 11:25:27.580411 ignition[803]: INFO : Ignition finished successfully Jul 15 11:25:27.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:27.580811 systemd[1]: Finished ignition-mount.service. Jul 15 11:25:27.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:27.585620 systemd[1]: Finished sysroot-boot.service. Jul 15 11:25:27.721472 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:25:27.726711 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Jul 15 11:25:27.729384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:25:27.729400 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:25:27.729409 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:25:27.732355 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 15 11:25:27.734587 systemd[1]: Starting ignition-files.service... 
Jul 15 11:25:27.747088 ignition[833]: INFO : Ignition 2.14.0 Jul 15 11:25:27.747088 ignition[833]: INFO : Stage: files Jul 15 11:25:27.748775 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:25:27.748775 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:25:27.748775 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:25:27.752562 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:25:27.752562 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:25:27.752562 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:25:27.752562 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:25:27.752562 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:25:27.752562 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:25:27.752562 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:25:27.752562 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 11:25:27.752562 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 11:25:27.750800 unknown[833]: wrote ssh authorized keys file for user: core Jul 15 11:25:27.799014 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 11:25:27.926640 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 
11:25:27.928655 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:25:27.928655 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 15 11:25:28.420431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 15 11:25:28.519536 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:25:28.521565 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 15 11:25:28.658805 systemd-networkd[717]: eth0: Gained IPv6LL
Jul 15 11:25:29.265075 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 15 11:25:29.988840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:25:29.988840 ignition[833]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:25:29.994437 ignition[833]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:25:30.029124 ignition[833]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:25:30.030617 ignition[833]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:25:30.030617 ignition[833]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:25:30.030617 ignition[833]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:25:30.030617 ignition[833]: INFO : files: files passed
Jul 15 11:25:30.030617 ignition[833]: INFO : Ignition finished successfully
Jul 15 11:25:30.037123 systemd[1]: Finished ignition-files.service.
Jul 15 11:25:30.043817 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jul 15 11:25:30.043838 kernel: audit: type=1130 audit(1752578730.036:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.038175 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 15 11:25:30.043822 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 15 11:25:30.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.048284 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 15 11:25:30.053188 kernel: audit: type=1130 audit(1752578730.047:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.053203 kernel: audit: type=1130 audit(1752578730.052:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.044283 systemd[1]: Starting ignition-quench.service...
Jul 15 11:25:30.060199 kernel: audit: type=1131 audit(1752578730.052:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.060277 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 11:25:30.045658 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 15 11:25:30.048376 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 11:25:30.048432 systemd[1]: Finished ignition-quench.service.
Jul 15 11:25:30.053261 systemd[1]: Reached target ignition-complete.target.
Jul 15 11:25:30.060594 systemd[1]: Starting initrd-parse-etc.service...
Jul 15 11:25:30.069940 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 11:25:30.070007 systemd[1]: Finished initrd-parse-etc.service.
Jul 15 11:25:30.078433 kernel: audit: type=1130 audit(1752578730.070:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.078448 kernel: audit: type=1131 audit(1752578730.070:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.071680 systemd[1]: Reached target initrd-fs.target.
Jul 15 11:25:30.078442 systemd[1]: Reached target initrd.target.
Jul 15 11:25:30.079164 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 15 11:25:30.079653 systemd[1]: Starting dracut-pre-pivot.service...
Jul 15 11:25:30.087548 systemd[1]: Finished dracut-pre-pivot.service.
Jul 15 11:25:30.092474 kernel: audit: type=1130 audit(1752578730.087:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.088823 systemd[1]: Starting initrd-cleanup.service...
Jul 15 11:25:30.096576 systemd[1]: Stopped target nss-lookup.target.
Jul 15 11:25:30.097416 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 15 11:25:30.098959 systemd[1]: Stopped target timers.target.
Jul 15 11:25:30.100489 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 11:25:30.106271 kernel: audit: type=1131 audit(1752578730.101:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.100571 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 15 11:25:30.102017 systemd[1]: Stopped target initrd.target.
Jul 15 11:25:30.106341 systemd[1]: Stopped target basic.target.
Jul 15 11:25:30.107808 systemd[1]: Stopped target ignition-complete.target.
Jul 15 11:25:30.109289 systemd[1]: Stopped target ignition-diskful.target.
Jul 15 11:25:30.110791 systemd[1]: Stopped target initrd-root-device.target.
Jul 15 11:25:30.112400 systemd[1]: Stopped target remote-fs.target.
Jul 15 11:25:30.113943 systemd[1]: Stopped target remote-fs-pre.target.
Jul 15 11:25:30.115545 systemd[1]: Stopped target sysinit.target.
Jul 15 11:25:30.116976 systemd[1]: Stopped target local-fs.target.
Jul 15 11:25:30.118453 systemd[1]: Stopped target local-fs-pre.target.
Jul 15 11:25:30.119924 systemd[1]: Stopped target swap.target.
Jul 15 11:25:30.126787 kernel: audit: type=1131 audit(1752578730.122:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.121279 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 11:25:30.121359 systemd[1]: Stopped dracut-pre-mount.service.
Jul 15 11:25:30.132657 kernel: audit: type=1131 audit(1752578730.127:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.122850 systemd[1]: Stopped target cryptsetup.target.
Jul 15 11:25:30.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.126819 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 11:25:30.126906 systemd[1]: Stopped dracut-initqueue.service.
Jul 15 11:25:30.128586 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 11:25:30.128665 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 15 11:25:30.132777 systemd[1]: Stopped target paths.target.
Jul 15 11:25:30.134163 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 11:25:30.137722 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 15 11:25:30.139226 systemd[1]: Stopped target slices.target.
Jul 15 11:25:30.140934 systemd[1]: Stopped target sockets.target.
Jul 15 11:25:30.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.142461 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 11:25:30.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.142545 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 15 11:25:30.149010 iscsid[723]: iscsid shutting down.
Jul 15 11:25:30.144097 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 11:25:30.144173 systemd[1]: Stopped ignition-files.service.
Jul 15 11:25:30.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.153642 ignition[873]: INFO : Ignition 2.14.0
Jul 15 11:25:30.153642 ignition[873]: INFO : Stage: umount
Jul 15 11:25:30.153642 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:25:30.153642 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:25:30.153642 ignition[873]: INFO : umount: umount passed
Jul 15 11:25:30.153642 ignition[873]: INFO : Ignition finished successfully
Jul 15 11:25:30.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.146042 systemd[1]: Stopping ignition-mount.service...
Jul 15 11:25:30.147339 systemd[1]: Stopping iscsid.service...
Jul 15 11:25:30.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.148929 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 11:25:30.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.149050 systemd[1]: Stopped kmod-static-nodes.service.
Jul 15 11:25:30.150659 systemd[1]: Stopping sysroot-boot.service...
Jul 15 11:25:30.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.152173 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 11:25:30.152308 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 15 11:25:30.153793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 11:25:30.153911 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 15 11:25:30.157042 systemd[1]: iscsid.service: Deactivated successfully.
Jul 15 11:25:30.157118 systemd[1]: Stopped iscsid.service.
Jul 15 11:25:30.159046 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 11:25:30.159107 systemd[1]: Stopped ignition-mount.service.
Jul 15 11:25:30.161328 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 11:25:30.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.161390 systemd[1]: Finished initrd-cleanup.service.
Jul 15 11:25:30.162860 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 11:25:30.162883 systemd[1]: Closed iscsid.socket.
Jul 15 11:25:30.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.163785 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 11:25:30.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.163826 systemd[1]: Stopped ignition-disks.service.
Jul 15 11:25:30.164739 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 11:25:30.164767 systemd[1]: Stopped ignition-kargs.service.
Jul 15 11:25:30.166159 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 11:25:30.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.166189 systemd[1]: Stopped ignition-setup.service.
Jul 15 11:25:30.166608 systemd[1]: Stopping iscsiuio.service...
Jul 15 11:25:30.167671 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 11:25:30.169186 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 15 11:25:30.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.169249 systemd[1]: Stopped iscsiuio.service.
Jul 15 11:25:30.170028 systemd[1]: Stopped target network.target.
Jul 15 11:25:30.171434 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 11:25:30.171459 systemd[1]: Closed iscsiuio.socket.
Jul 15 11:25:30.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.172208 systemd[1]: Stopping systemd-networkd.service...
Jul 15 11:25:30.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.205000 audit: BPF prog-id=6 op=UNLOAD
Jul 15 11:25:30.173831 systemd[1]: Stopping systemd-resolved.service...
Jul 15 11:25:30.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.177716 systemd-networkd[717]: eth0: DHCPv6 lease lost
Jul 15 11:25:30.208000 audit: BPF prog-id=9 op=UNLOAD
Jul 15 11:25:30.180173 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 11:25:30.180242 systemd[1]: Stopped systemd-networkd.service.
Jul 15 11:25:30.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.181281 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 11:25:30.181313 systemd[1]: Closed systemd-networkd.socket.
Jul 15 11:25:30.183608 systemd[1]: Stopping network-cleanup.service...
Jul 15 11:25:30.184490 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 11:25:30.184538 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 15 11:25:30.186252 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 11:25:30.186283 systemd[1]: Stopped systemd-sysctl.service.
Jul 15 11:25:30.187707 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 11:25:30.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.187743 systemd[1]: Stopped systemd-modules-load.service.
Jul 15 11:25:30.189559 systemd[1]: Stopping systemd-udevd.service...
Jul 15 11:25:30.192448 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 11:25:30.192798 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 11:25:30.192877 systemd[1]: Stopped systemd-resolved.service.
Jul 15 11:25:30.197538 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 11:25:30.197638 systemd[1]: Stopped systemd-udevd.service.
Jul 15 11:25:30.199962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 11:25:30.199993 systemd[1]: Closed systemd-udevd-control.socket.
Jul 15 11:25:30.201357 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 11:25:30.201381 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 15 11:25:30.202897 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 11:25:30.202930 systemd[1]: Stopped dracut-pre-udev.service.
Jul 15 11:25:30.203499 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 11:25:30.203525 systemd[1]: Stopped dracut-cmdline.service.
Jul 15 11:25:30.205326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 11:25:30.205354 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 15 11:25:30.207212 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 15 11:25:30.209363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 11:25:30.209404 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 15 11:25:30.215067 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 11:25:30.216704 systemd[1]: Stopped network-cleanup.service.
Jul 15 11:25:30.220542 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 11:25:30.222509 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 15 11:25:30.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.249082 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 11:25:30.249957 systemd[1]: Stopped sysroot-boot.service.
Jul 15 11:25:30.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.251392 systemd[1]: Reached target initrd-switch-root.target.
Jul 15 11:25:30.253027 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 11:25:30.253918 systemd[1]: Stopped initrd-setup-root.service.
Jul 15 11:25:30.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:30.255949 systemd[1]: Starting initrd-switch-root.service...
Jul 15 11:25:30.261381 systemd[1]: Switching root.
Jul 15 11:25:30.264000 audit: BPF prog-id=8 op=UNLOAD
Jul 15 11:25:30.264000 audit: BPF prog-id=7 op=UNLOAD
Jul 15 11:25:30.264000 audit: BPF prog-id=5 op=UNLOAD
Jul 15 11:25:30.264000 audit: BPF prog-id=4 op=UNLOAD
Jul 15 11:25:30.264000 audit: BPF prog-id=3 op=UNLOAD
Jul 15 11:25:30.282741 systemd-journald[198]: Journal stopped
Jul 15 11:25:33.030157 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Jul 15 11:25:33.030213 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 15 11:25:33.030230 kernel: SELinux: Class anon_inode not defined in policy.
Jul 15 11:25:33.030240 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 15 11:25:33.030250 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 11:25:33.030262 kernel: SELinux: policy capability open_perms=1
Jul 15 11:25:33.030271 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 11:25:33.030281 kernel: SELinux: policy capability always_check_network=0
Jul 15 11:25:33.030290 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 11:25:33.030299 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 11:25:33.030309 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 11:25:33.030319 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 11:25:33.030329 systemd[1]: Successfully loaded SELinux policy in 38.952ms.
Jul 15 11:25:33.030349 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.640ms.
Jul 15 11:25:33.030363 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:25:33.030373 systemd[1]: Detected virtualization kvm.
Jul 15 11:25:33.030383 systemd[1]: Detected architecture x86-64.
Jul 15 11:25:33.030397 systemd[1]: Detected first boot.
Jul 15 11:25:33.030407 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:25:33.030418 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 15 11:25:33.030427 systemd[1]: Populated /etc with preset unit settings.
Jul 15 11:25:33.030439 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:25:33.030451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:25:33.030462 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:25:33.030476 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 11:25:33.030486 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Jul 15 11:25:33.030498 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 15 11:25:33.030510 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 15 11:25:33.030523 systemd[1]: Created slice system-getty.slice.
Jul 15 11:25:33.030534 systemd[1]: Created slice system-modprobe.slice.
Jul 15 11:25:33.030545 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 15 11:25:33.030555 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 15 11:25:33.030566 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 15 11:25:33.030577 systemd[1]: Created slice user.slice.
Jul 15 11:25:33.030586 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:25:33.030597 systemd[1]: Started systemd-ask-password-wall.path.
Jul 15 11:25:33.030607 systemd[1]: Set up automount boot.automount.
Jul 15 11:25:33.030617 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 15 11:25:33.030628 systemd[1]: Reached target integritysetup.target.
Jul 15 11:25:33.030638 systemd[1]: Reached target remote-cryptsetup.target.
Jul 15 11:25:33.030649 systemd[1]: Reached target remote-fs.target.
Jul 15 11:25:33.030659 systemd[1]: Reached target slices.target.
Jul 15 11:25:33.030669 systemd[1]: Reached target swap.target.
Jul 15 11:25:33.030679 systemd[1]: Reached target torcx.target.
Jul 15 11:25:33.030702 systemd[1]: Reached target veritysetup.target.
Jul 15 11:25:33.030717 systemd[1]: Listening on systemd-coredump.socket.
Jul 15 11:25:33.030728 systemd[1]: Listening on systemd-initctl.socket.
Jul 15 11:25:33.030739 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:25:33.030750 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:25:33.030762 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:25:33.030773 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:25:33.030783 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:25:33.030794 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:25:33.030805 systemd[1]: Listening on systemd-userdbd.socket.
Jul 15 11:25:33.030822 systemd[1]: Mounting dev-hugepages.mount...
Jul 15 11:25:33.030833 systemd[1]: Mounting dev-mqueue.mount...
Jul 15 11:25:33.030844 systemd[1]: Mounting media.mount...
Jul 15 11:25:33.030855 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:25:33.030866 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 15 11:25:33.030876 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 15 11:25:33.030886 systemd[1]: Mounting tmp.mount...
Jul 15 11:25:33.030896 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 15 11:25:33.030906 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:25:33.030916 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:25:33.030927 systemd[1]: Starting modprobe@configfs.service...
Jul 15 11:25:33.030938 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:25:33.030948 systemd[1]: Starting modprobe@drm.service...
Jul 15 11:25:33.030958 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:25:33.030970 systemd[1]: Starting modprobe@fuse.service...
Jul 15 11:25:33.030981 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:25:33.030992 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 11:25:33.031003 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 15 11:25:33.031013 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 15 11:25:33.031024 systemd[1]: Starting systemd-journald.service...
Jul 15 11:25:33.031035 kernel: fuse: init (API version 7.34)
Jul 15 11:25:33.031045 kernel: loop: module loaded
Jul 15 11:25:33.031054 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:25:33.031065 systemd[1]: Starting systemd-network-generator.service...
Jul 15 11:25:33.031075 systemd[1]: Starting systemd-remount-fs.service...
Jul 15 11:25:33.031085 systemd[1]: Starting systemd-udev-trigger.service...
Jul 15 11:25:33.031095 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:25:33.031115 systemd[1]: Mounted dev-hugepages.mount.
Jul 15 11:25:33.031126 systemd[1]: Mounted dev-mqueue.mount.
Jul 15 11:25:33.031138 systemd[1]: Mounted media.mount.
Jul 15 11:25:33.031148 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 15 11:25:33.031158 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 15 11:25:33.031168 systemd[1]: Mounted tmp.mount.
Jul 15 11:25:33.031178 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:25:33.031188 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 11:25:33.031199 systemd[1]: Finished modprobe@configfs.service.
Jul 15 11:25:33.031217 systemd-journald[1010]: Journal started
Jul 15 11:25:33.031253 systemd-journald[1010]: Runtime Journal (/run/log/journal/2c9250b9cd5b423d8eb61ef450d3df19) is 6.0M, max 48.5M, 42.5M free.
Jul 15 11:25:32.945000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:25:32.945000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 15 11:25:33.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.028000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 15 11:25:33.028000 audit[1010]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff1b059020 a2=4000 a3=7fff1b0590bc items=0 ppid=1 pid=1010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:25:33.028000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 15 11:25:33.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.033352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 11:25:33.033381 systemd[1]: Finished modprobe@dm_mod.service.
Jul 15 11:25:33.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.036733 systemd[1]: Started systemd-journald.service.
Jul 15 11:25:33.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.036570 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 11:25:33.036897 systemd[1]: Finished modprobe@drm.service.
Jul 15 11:25:33.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.037956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:25:33.038205 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:25:33.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.039572 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 15 11:25:33.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.040618 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 11:25:33.040934 systemd[1]: Finished modprobe@fuse.service.
Jul 15 11:25:33.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.042047 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:25:33.042278 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:25:33.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.043555 systemd[1]: Finished systemd-modules-load.service.
Jul 15 11:25:33.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.044782 systemd[1]: Finished systemd-network-generator.service.
Jul 15 11:25:33.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.046142 systemd[1]: Finished systemd-remount-fs.service.
Jul 15 11:25:33.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.047394 systemd[1]: Reached target network-pre.target.
Jul 15 11:25:33.049325 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 15 11:25:33.051092 systemd[1]: Mounting sys-kernel-config.mount...
Jul 15 11:25:33.051911 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 11:25:33.053092 systemd[1]: Starting systemd-hwdb-update.service...
Jul 15 11:25:33.054961 systemd[1]: Starting systemd-journal-flush.service...
Jul 15 11:25:33.055841 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:25:33.056919 systemd[1]: Starting systemd-random-seed.service...
Jul 15 11:25:33.057936 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:25:33.059008 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:25:33.060386 systemd-journald[1010]: Time spent on flushing to /var/log/journal/2c9250b9cd5b423d8eb61ef450d3df19 is 23.019ms for 1041 entries.
Jul 15 11:25:33.060386 systemd-journald[1010]: System Journal (/var/log/journal/2c9250b9cd5b423d8eb61ef450d3df19) is 8.0M, max 195.6M, 187.6M free.
Jul 15 11:25:33.098918 systemd-journald[1010]: Received client request to flush runtime journal.
Jul 15 11:25:33.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.060858 systemd[1]: Starting systemd-sysusers.service...
Jul 15 11:25:33.064350 systemd[1]: Finished systemd-udev-trigger.service.
Jul 15 11:25:33.100149 udevadm[1055]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 15 11:25:33.065399 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 15 11:25:33.066350 systemd[1]: Mounted sys-kernel-config.mount.
Jul 15 11:25:33.069457 systemd[1]: Starting systemd-udev-settle.service...
Jul 15 11:25:33.071715 systemd[1]: Finished systemd-random-seed.service.
Jul 15 11:25:33.072620 systemd[1]: Reached target first-boot-complete.target.
Jul 15 11:25:33.083746 systemd[1]: Finished systemd-sysctl.service.
Jul 15 11:25:33.085070 systemd[1]: Finished systemd-sysusers.service.
Jul 15 11:25:33.086992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:25:33.099947 systemd[1]: Finished systemd-journal-flush.service.
Jul 15 11:25:33.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.106123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:25:33.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.921360 systemd[1]: Finished systemd-hwdb-update.service.
Jul 15 11:25:33.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.923344 systemd[1]: Starting systemd-udevd.service...
Jul 15 11:25:33.938561 systemd-udevd[1067]: Using default interface naming scheme 'v252'.
Jul 15 11:25:33.950572 systemd[1]: Started systemd-udevd.service.
Jul 15 11:25:33.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:33.953627 systemd[1]: Starting systemd-networkd.service...
Jul 15 11:25:33.960513 systemd[1]: Starting systemd-userdbd.service...
Jul 15 11:25:33.973318 systemd[1]: Found device dev-ttyS0.device.
Jul 15 11:25:34.003541 systemd[1]: Started systemd-userdbd.service.
Jul 15 11:25:34.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:25:34.018718 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 15 11:25:34.024705 kernel: ACPI: button: Power Button [PWRF]
Jul 15 11:25:34.028303 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 15 11:25:34.035000 audit[1086]: AVC avc: denied { confidentiality } for pid=1086 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 15 11:25:34.050716 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 15 11:25:34.051035 systemd-networkd[1080]: lo: Link UP
Jul 15 11:25:34.035000 audit[1086]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5634e230ffb0 a1=338ac a2=7f5ce6ba3bc5 a3=5 items=110 ppid=1067 pid=1086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:25:34.051048 systemd-networkd[1080]: lo: Gained carrier
Jul 15 11:25:34.051433 systemd-networkd[1080]: Enumeration completed
Jul 15 11:25:34.051563 systemd[1]: Started systemd-networkd.service.
Jul 15 11:25:34.051990 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 11:25:34.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.035000 audit: CWD cwd="/" Jul 15 11:25:34.035000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=1 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=2 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=3 name=(null) inode=16456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=4 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=5 name=(null) inode=16457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=6 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=7 name=(null) inode=16458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=8 name=(null) inode=16458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=9 name=(null) inode=16459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=10 name=(null) inode=16458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.053504 systemd-networkd[1080]: eth0: Link UP Jul 15 11:25:34.053514 systemd-networkd[1080]: eth0: Gained carrier Jul 15 11:25:34.035000 audit: PATH item=11 name=(null) inode=16460 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=12 name=(null) inode=16458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=13 name=(null) inode=16461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=14 name=(null) inode=16458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=15 name=(null) inode=16462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=16 name=(null) inode=16458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=17 name=(null) inode=16463 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=18 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=19 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=20 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=21 name=(null) inode=16465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=22 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=23 name=(null) inode=16466 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=24 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=25 name=(null) inode=16467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=26 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=27 name=(null) inode=16468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=28 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=29 name=(null) inode=16469 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=30 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=31 name=(null) inode=16470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=32 name=(null) inode=16470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=33 name=(null) inode=16471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=34 name=(null) inode=16470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=35 name=(null) inode=16472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=36 name=(null) inode=16470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=37 name=(null) inode=16473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=38 name=(null) inode=16470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=39 name=(null) inode=16474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=40 name=(null) inode=16470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=41 name=(null) inode=16475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=42 name=(null) inode=16455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=43 name=(null) inode=16476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:25:34.035000 audit: PATH item=44 name=(null) inode=16476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=45 name=(null) inode=16477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=46 name=(null) inode=16476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=47 name=(null) inode=16478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=48 name=(null) inode=16476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=49 name=(null) inode=16479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=50 name=(null) inode=16476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=51 name=(null) inode=16480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=52 name=(null) inode=16476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=53 
name=(null) inode=16481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=55 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=56 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=57 name=(null) inode=16483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=58 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=59 name=(null) inode=16484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=60 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=61 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=62 name=(null) inode=16485 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=63 name=(null) inode=16486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=64 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=65 name=(null) inode=16487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=66 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=67 name=(null) inode=16488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=68 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=69 name=(null) inode=16489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=70 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=71 name=(null) inode=16490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=72 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=73 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=74 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=75 name=(null) inode=16492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=76 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=77 name=(null) inode=16493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=78 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=79 name=(null) inode=16494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=80 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=81 name=(null) inode=16495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=82 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=83 name=(null) inode=16496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=84 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=85 name=(null) inode=16497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=86 name=(null) inode=16497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=87 name=(null) inode=16498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=88 name=(null) inode=16497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=89 name=(null) inode=16499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=90 name=(null) inode=16497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=91 name=(null) inode=16500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=92 name=(null) inode=16497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=93 name=(null) inode=16501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=94 name=(null) inode=16497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=95 name=(null) inode=16502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=96 name=(null) inode=16482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=97 name=(null) inode=16503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=98 name=(null) inode=16503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:25:34.035000 audit: PATH item=99 name=(null) inode=16504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=100 name=(null) inode=16503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=101 name=(null) inode=16505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=102 name=(null) inode=16503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=103 name=(null) inode=16506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=104 name=(null) inode=16503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=105 name=(null) inode=16507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=106 name=(null) inode=16503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=107 name=(null) inode=16508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH 
item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PATH item=109 name=(null) inode=16509 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:25:34.035000 audit: PROCTITLE proctitle="(udev-worker)" Jul 15 11:25:34.065810 systemd-networkd[1080]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:25:34.078709 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 11:25:34.113715 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 11:25:34.113967 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 15 11:25:34.114095 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 11:25:34.134719 kernel: kvm: Nested Virtualization enabled Jul 15 11:25:34.134793 kernel: SVM: kvm: Nested Paging enabled Jul 15 11:25:34.134821 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 15 11:25:34.134852 kernel: SVM: Virtual GIF supported Jul 15 11:25:34.155707 kernel: EDAC MC: Ver: 3.0.0 Jul 15 11:25:34.183143 systemd[1]: Finished systemd-udev-settle.service. Jul 15 11:25:34.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.185258 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:25:34.192355 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:25:34.219440 systemd[1]: Finished lvm2-activation-early.service. 
Jul 15 11:25:34.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.220485 systemd[1]: Reached target cryptsetup.target. Jul 15 11:25:34.222375 systemd[1]: Starting lvm2-activation.service... Jul 15 11:25:34.225833 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:25:34.259253 systemd[1]: Finished lvm2-activation.service. Jul 15 11:25:34.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.260138 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:25:34.260961 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:25:34.260984 systemd[1]: Reached target local-fs.target. Jul 15 11:25:34.261839 systemd[1]: Reached target machines.target. Jul 15 11:25:34.263838 systemd[1]: Starting ldconfig.service... Jul 15 11:25:34.264818 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:25:34.264858 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:34.265820 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:25:34.267404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:25:34.269323 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:25:34.271221 systemd[1]: Starting systemd-sysext.service... 
Jul 15 11:25:34.274997 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1108 (bootctl) Jul 15 11:25:34.275829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:25:34.277002 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 15 11:25:34.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.286565 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:25:34.290556 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:25:34.290744 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:25:34.299707 kernel: loop0: detected capacity change from 0 to 221472 Jul 15 11:25:34.306900 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31) Jul 15 11:25:34.306900 systemd-fsck[1116]: /dev/vda1: 790 files, 120725/258078 clusters Jul 15 11:25:34.308017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:25:34.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.311109 systemd[1]: Mounting boot.mount... Jul 15 11:25:34.322734 systemd[1]: Mounted boot.mount. Jul 15 11:25:34.841250 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:25:34.840735 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:25:34.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.860085 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 15 11:25:34.860643 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:25:34.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.869713 kernel: loop1: detected capacity change from 0 to 221472 Jul 15 11:25:34.872986 (sd-sysext)[1130]: Using extensions 'kubernetes'. Jul 15 11:25:34.873258 (sd-sysext)[1130]: Merged extensions into '/usr'. Jul 15 11:25:34.887661 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:25:34.889120 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:25:34.890133 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:25:34.891256 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:25:34.893122 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:25:34.894821 systemd[1]: Starting modprobe@loop.service... Jul 15 11:25:34.895562 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:25:34.895659 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:34.895768 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:25:34.896577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:25:34.896882 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:25:34.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:25:34.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.898933 ldconfig[1107]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:25:34.899910 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:25:34.901044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:25:34.901190 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:25:34.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.902429 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:25:34.902574 systemd[1]: Finished modprobe@loop.service. Jul 15 11:25:34.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.903784 systemd[1]: Finished ldconfig.service. Jul 15 11:25:34.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 15 11:25:34.904852 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:25:34.904944 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:25:34.906029 systemd[1]: Finished systemd-sysext.service. Jul 15 11:25:34.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:34.907940 systemd[1]: Starting ensure-sysext.service... Jul 15 11:25:34.909784 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:25:34.912564 systemd[1]: Reloading. Jul 15 11:25:34.918972 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:25:34.920085 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:25:34.921448 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 11:25:34.954767 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-15T11:25:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:25:34.954790 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-15T11:25:34Z" level=info msg="torcx already run" Jul 15 11:25:35.018894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 15 11:25:35.018908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:25:35.035414 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:25:35.088415 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:25:35.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.091611 kernel: kauditd_printk_skb: 209 callbacks suppressed Jul 15 11:25:35.091655 kernel: audit: type=1130 audit(1752578735.089:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.091594 systemd[1]: Starting audit-rules.service... Jul 15 11:25:35.095215 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:25:35.097042 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:25:35.099395 systemd[1]: Starting systemd-resolved.service... Jul 15 11:25:35.101392 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:25:35.102968 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:25:35.104383 systemd[1]: Finished clean-ca-certificates.service. Jul 15 11:25:35.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:25:35.111720 kernel: audit: type=1130 audit(1752578735.105:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.111781 kernel: audit: type=1127 audit(1752578735.106:134): pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.106000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.108476 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:25:35.110890 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.111956 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:25:35.114301 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:25:35.116163 systemd[1]: Starting modprobe@loop.service... Jul 15 11:25:35.116917 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.117015 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:35.117096 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:25:35.118092 systemd[1]: Finished systemd-update-utmp.service. 
Jul 15 11:25:35.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.119313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:25:35.119432 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:25:35.122890 kernel: audit: type=1130 audit(1752578735.118:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.126709 kernel: audit: type=1130 audit(1752578735.123:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.126733 kernel: audit: type=1131 audit(1752578735.123:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.123873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:25:35.123987 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 15 11:25:35.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.133762 kernel: audit: type=1130 audit(1752578735.130:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.133784 kernel: audit: type=1131 audit(1752578735.130:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:25:35.133799 kernel: audit: type=1305 audit(1752578735.131:140): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:25:35.133815 kernel: audit: type=1300 audit(1752578735.131:140): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd71b6960 a2=420 a3=0 items=0 ppid=1213 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:25:35.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:25:35.131000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:25:35.131000 audit[1242]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd71b6960 a2=420 a3=0 items=0 ppid=1213 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:25:35.131000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:25:35.133974 augenrules[1242]: No rules Jul 15 11:25:35.131027 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:25:35.144518 systemd[1]: Finished audit-rules.service. Jul 15 11:25:35.145547 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:25:35.145709 systemd[1]: Finished modprobe@loop.service. Jul 15 11:25:35.148800 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.149854 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:25:35.151323 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:25:35.152936 systemd[1]: Starting modprobe@loop.service... Jul 15 11:25:35.153649 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.153781 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:35.154938 systemd[1]: Starting systemd-update-done.service... Jul 15 11:25:35.155742 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:25:35.157041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 15 11:25:35.157163 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:25:35.158325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:25:35.158445 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:25:35.159584 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:25:35.159725 systemd[1]: Finished modprobe@loop.service. Jul 15 11:25:35.160890 systemd[1]: Finished systemd-update-done.service. Jul 15 11:25:35.162293 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:25:35.162378 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.165401 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.166351 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:25:35.168038 systemd[1]: Starting modprobe@drm.service... Jul 15 11:25:35.169614 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:25:35.171312 systemd[1]: Starting modprobe@loop.service... Jul 15 11:25:35.172360 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.172457 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:35.173375 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:25:35.174763 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:25:35.175648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:25:35.175795 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:25:35.177182 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 15 11:25:35.177296 systemd[1]: Finished modprobe@drm.service. Jul 15 11:25:35.178516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:25:35.178626 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:25:35.179980 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:25:35.180239 systemd[1]: Finished modprobe@loop.service. Jul 15 11:25:35.181771 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:25:35.182721 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:25:35.183151 systemd[1]: Finished ensure-sysext.service. Jul 15 11:25:35.193329 systemd-resolved[1223]: Positive Trust Anchors: Jul 15 11:25:35.193343 systemd-resolved[1223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:25:35.193371 systemd-resolved[1223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:25:35.196104 systemd[1]: Started systemd-timesyncd.service. Jul 15 11:25:36.426063 systemd-timesyncd[1224]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 11:25:36.426101 systemd-timesyncd[1224]: Initial clock synchronization to Tue 2025-07-15 11:25:36.426008 UTC. Jul 15 11:25:36.426173 systemd[1]: Reached target time-set.target. Jul 15 11:25:36.429204 systemd-resolved[1223]: Defaulting to hostname 'linux'. Jul 15 11:25:36.430510 systemd[1]: Started systemd-resolved.service. Jul 15 11:25:36.431333 systemd[1]: Reached target network.target. 
Jul 15 11:25:36.432130 systemd[1]: Reached target nss-lookup.target. Jul 15 11:25:36.432954 systemd[1]: Reached target sysinit.target. Jul 15 11:25:36.433791 systemd[1]: Started motdgen.path. Jul 15 11:25:36.434515 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:25:36.435700 systemd[1]: Started logrotate.timer. Jul 15 11:25:36.436496 systemd[1]: Started mdadm.timer. Jul 15 11:25:36.437165 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 15 11:25:36.438066 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 11:25:36.438094 systemd[1]: Reached target paths.target. Jul 15 11:25:36.438854 systemd[1]: Reached target timers.target. Jul 15 11:25:36.439840 systemd[1]: Listening on dbus.socket. Jul 15 11:25:36.441585 systemd[1]: Starting docker.socket... Jul 15 11:25:36.443042 systemd[1]: Listening on sshd.socket. Jul 15 11:25:36.443876 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:36.444103 systemd[1]: Listening on docker.socket. Jul 15 11:25:36.444878 systemd[1]: Reached target sockets.target. Jul 15 11:25:36.445657 systemd[1]: Reached target basic.target. Jul 15 11:25:36.446510 systemd[1]: System is tainted: cgroupsv1 Jul 15 11:25:36.446545 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:25:36.446561 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:25:36.447396 systemd[1]: Starting containerd.service... Jul 15 11:25:36.448983 systemd[1]: Starting dbus.service... Jul 15 11:25:36.450521 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:25:36.452122 systemd[1]: Starting extend-filesystems.service... 
Jul 15 11:25:36.453116 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:25:36.454085 jq[1276]: false Jul 15 11:25:36.454196 systemd[1]: Starting motdgen.service... Jul 15 11:25:36.455758 systemd[1]: Starting prepare-helm.service... Jul 15 11:25:36.457513 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:25:36.459338 systemd[1]: Starting sshd-keygen.service... Jul 15 11:25:36.461610 systemd[1]: Starting systemd-logind.service... Jul 15 11:25:36.462494 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:25:36.462539 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:25:36.463513 systemd[1]: Starting update-engine.service... Jul 15 11:25:36.465108 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:25:36.466866 extend-filesystems[1277]: Found loop1 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found sr0 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda1 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda2 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda3 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found usr Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda4 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda6 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda7 Jul 15 11:25:36.468506 extend-filesystems[1277]: Found vda9 Jul 15 11:25:36.468506 extend-filesystems[1277]: Checking size of /dev/vda9 Jul 15 11:25:36.468242 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 15 11:25:36.482848 dbus-daemon[1275]: [system] SELinux support is enabled Jul 15 11:25:36.506628 extend-filesystems[1277]: Resized partition /dev/vda9 Jul 15 11:25:36.468473 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:25:36.507865 extend-filesystems[1328]: resize2fs 1.46.5 (30-Dec-2021) Jul 15 11:25:36.470809 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 11:25:36.472630 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:25:36.512365 jq[1296]: true Jul 15 11:25:36.475295 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:25:36.475582 systemd[1]: Finished motdgen.service. Jul 15 11:25:36.512802 tar[1301]: linux-amd64/helm Jul 15 11:25:36.482992 systemd[1]: Started dbus.service. Jul 15 11:25:36.513320 jq[1306]: true Jul 15 11:25:36.485507 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 11:25:36.485530 systemd[1]: Reached target system-config.target. Jul 15 11:25:36.486645 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:25:36.486660 systemd[1]: Reached target user-config.target. Jul 15 11:25:36.517386 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 11:25:36.520260 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:25:36.520278 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 15 11:25:36.523823 update_engine[1292]: I0715 11:25:36.523514 1292 main.cc:92] Flatcar Update Engine starting
Jul 15 11:25:36.525640 env[1303]: time="2025-07-15T11:25:36.525120570Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 15 11:25:36.527802 systemd[1]: Started update-engine.service.
Jul 15 11:25:36.528386 update_engine[1292]: I0715 11:25:36.527855 1292 update_check_scheduler.cc:74] Next update check in 2m34s
Jul 15 11:25:36.528388 systemd-logind[1290]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 15 11:25:36.528404 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 11:25:36.529927 systemd-logind[1290]: New seat seat0.
Jul 15 11:25:36.532026 systemd[1]: Started locksmithd.service.
Jul 15 11:25:36.535850 systemd[1]: Started systemd-logind.service.
Jul 15 11:25:36.548831 bash[1332]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 11:25:36.549407 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 11:25:36.549789 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 15 11:25:36.551028 env[1303]: time="2025-07-15T11:25:36.550991879Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 15 11:25:36.574270 env[1303]: time="2025-07-15T11:25:36.574209381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:25:36.575364 env[1303]: time="2025-07-15T11:25:36.575342356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:25:36.575461 env[1303]: time="2025-07-15T11:25:36.575442043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:25:36.575734 env[1303]: time="2025-07-15T11:25:36.575714534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:25:36.575815 env[1303]: time="2025-07-15T11:25:36.575796768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 15 11:25:36.575890 env[1303]: time="2025-07-15T11:25:36.575871057Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 15 11:25:36.575959 env[1303]: time="2025-07-15T11:25:36.575941449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 15 11:25:36.576082 env[1303]: time="2025-07-15T11:25:36.576065441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:25:36.576334 env[1303]: time="2025-07-15T11:25:36.576317724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:25:36.576550 env[1303]: time="2025-07-15T11:25:36.576531245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:25:36.576633 env[1303]: time="2025-07-15T11:25:36.576615423Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 15 11:25:36.576738 env[1303]: time="2025-07-15T11:25:36.576720810Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 15 11:25:36.576807 env[1303]: time="2025-07-15T11:25:36.576789669Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 11:25:36.577038 extend-filesystems[1328]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 11:25:36.577038 extend-filesystems[1328]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 11:25:36.577038 extend-filesystems[1328]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 11:25:36.581868 extend-filesystems[1277]: Resized filesystem in /dev/vda9
Jul 15 11:25:36.577840 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 11:25:36.578031 systemd[1]: Finished extend-filesystems.service.
Jul 15 11:25:36.584477 locksmithd[1337]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 11:25:36.584735 env[1303]: time="2025-07-15T11:25:36.584707616Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 15 11:25:36.584775 env[1303]: time="2025-07-15T11:25:36.584737071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 15 11:25:36.584775 env[1303]: time="2025-07-15T11:25:36.584750446Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 15 11:25:36.584813 env[1303]: time="2025-07-15T11:25:36.584778038Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584813 env[1303]: time="2025-07-15T11:25:36.584794298Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584813 env[1303]: time="2025-07-15T11:25:36.584807202Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584868 env[1303]: time="2025-07-15T11:25:36.584818574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584868 env[1303]: time="2025-07-15T11:25:36.584831057Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584868 env[1303]: time="2025-07-15T11:25:36.584843170Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584868 env[1303]: time="2025-07-15T11:25:36.584854561Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584868 env[1303]: time="2025-07-15T11:25:36.584865562Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.584956 env[1303]: time="2025-07-15T11:25:36.584876462Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 15 11:25:36.584977 env[1303]: time="2025-07-15T11:25:36.584955921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 15 11:25:36.585031 env[1303]: time="2025-07-15T11:25:36.585018398Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 15 11:25:36.585306 env[1303]: time="2025-07-15T11:25:36.585287062Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 15 11:25:36.585329 env[1303]: time="2025-07-15T11:25:36.585311217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585329 env[1303]: time="2025-07-15T11:25:36.585322719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 15 11:25:36.585366 env[1303]: time="2025-07-15T11:25:36.585357494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585398 env[1303]: time="2025-07-15T11:25:36.585368996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585418 env[1303]: time="2025-07-15T11:25:36.585395435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585418 env[1303]: time="2025-07-15T11:25:36.585405745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585471 env[1303]: time="2025-07-15T11:25:36.585417216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585471 env[1303]: time="2025-07-15T11:25:36.585427716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585471 env[1303]: time="2025-07-15T11:25:36.585446301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585471 env[1303]: time="2025-07-15T11:25:36.585455708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585471 env[1303]: time="2025-07-15T11:25:36.585467340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 15 11:25:36.585579 env[1303]: time="2025-07-15T11:25:36.585566416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585602 env[1303]: time="2025-07-15T11:25:36.585582025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585602 env[1303]: time="2025-07-15T11:25:36.585592324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585644 env[1303]: time="2025-07-15T11:25:36.585603095Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 15 11:25:36.585644 env[1303]: time="2025-07-15T11:25:36.585617732Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 15 11:25:36.585644 env[1303]: time="2025-07-15T11:25:36.585629143Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 15 11:25:36.585702 env[1303]: time="2025-07-15T11:25:36.585645364Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 15 11:25:36.585702 env[1303]: time="2025-07-15T11:25:36.585675400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 15 11:25:36.585884 env[1303]: time="2025-07-15T11:25:36.585837915Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 15 11:25:36.586413 env[1303]: time="2025-07-15T11:25:36.585887848Z" level=info msg="Connect containerd service"
Jul 15 11:25:36.586413 env[1303]: time="2025-07-15T11:25:36.585914318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 15 11:25:36.586413 env[1303]: time="2025-07-15T11:25:36.586337512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:25:36.586535 env[1303]: time="2025-07-15T11:25:36.586521256Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 11:25:36.586562 env[1303]: time="2025-07-15T11:25:36.586553707Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 11:25:36.586597 env[1303]: time="2025-07-15T11:25:36.586588873Z" level=info msg="containerd successfully booted in 0.064549s"
Jul 15 11:25:36.586648 systemd[1]: Started containerd.service.
Jul 15 11:25:36.593931 env[1303]: time="2025-07-15T11:25:36.592698198Z" level=info msg="Start subscribing containerd event"
Jul 15 11:25:36.593931 env[1303]: time="2025-07-15T11:25:36.592742812Z" level=info msg="Start recovering state"
Jul 15 11:25:36.593931 env[1303]: time="2025-07-15T11:25:36.592791373Z" level=info msg="Start event monitor"
Jul 15 11:25:36.593931 env[1303]: time="2025-07-15T11:25:36.592808254Z" level=info msg="Start snapshots syncer"
Jul 15 11:25:36.593931 env[1303]: time="2025-07-15T11:25:36.592815688Z" level=info msg="Start cni network conf syncer for default"
Jul 15 11:25:36.593931 env[1303]: time="2025-07-15T11:25:36.592821529Z" level=info msg="Start streaming server"
Jul 15 11:25:36.863585 systemd-networkd[1080]: eth0: Gained IPv6LL
Jul 15 11:25:36.865539 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 15 11:25:36.866957 systemd[1]: Reached target network-online.target.
Jul 15 11:25:36.869077 systemd[1]: Starting kubelet.service...
Jul 15 11:25:36.885409 sshd_keygen[1300]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 11:25:36.890019 tar[1301]: linux-amd64/LICENSE
Jul 15 11:25:36.890019 tar[1301]: linux-amd64/README.md
Jul 15 11:25:36.893655 systemd[1]: Finished prepare-helm.service.
Jul 15 11:25:36.903286 systemd[1]: Finished sshd-keygen.service.
Jul 15 11:25:36.905202 systemd[1]: Starting issuegen.service...
Jul 15 11:25:36.909828 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 11:25:36.910096 systemd[1]: Finished issuegen.service.
Jul 15 11:25:36.912241 systemd[1]: Starting systemd-user-sessions.service...
Jul 15 11:25:36.917636 systemd[1]: Finished systemd-user-sessions.service.
Jul 15 11:25:36.919683 systemd[1]: Started getty@tty1.service.
Jul 15 11:25:36.921351 systemd[1]: Started serial-getty@ttyS0.service.
Jul 15 11:25:36.922316 systemd[1]: Reached target getty.target.
Jul 15 11:25:37.511958 systemd[1]: Started kubelet.service.
Jul 15 11:25:37.513147 systemd[1]: Reached target multi-user.target.
Jul 15 11:25:37.515143 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 15 11:25:37.522611 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 15 11:25:37.522942 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 15 11:25:37.525470 systemd[1]: Startup finished in 6.216s (kernel) + 5.970s (userspace) = 12.187s.
Jul 15 11:25:37.914257 kubelet[1376]: E0715 11:25:37.914150 1376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:25:37.915618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:25:37.915744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:25:38.908029 systemd[1]: Created slice system-sshd.slice.
Jul 15 11:25:38.909289 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:42192.service.
Jul 15 11:25:38.952743 sshd[1386]: Accepted publickey for core from 10.0.0.1 port 42192 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:25:38.954201 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:25:38.961058 systemd[1]: Created slice user-500.slice.
Jul 15 11:25:38.961904 systemd[1]: Starting user-runtime-dir@500.service...
Jul 15 11:25:38.963549 systemd-logind[1290]: New session 1 of user core.
Jul 15 11:25:38.970119 systemd[1]: Finished user-runtime-dir@500.service.
Jul 15 11:25:38.971520 systemd[1]: Starting user@500.service...
Jul 15 11:25:38.974358 (systemd)[1391]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:25:39.038899 systemd[1391]: Queued start job for default target default.target.
Jul 15 11:25:39.039078 systemd[1391]: Reached target paths.target.
Jul 15 11:25:39.039092 systemd[1391]: Reached target sockets.target.
Jul 15 11:25:39.039103 systemd[1391]: Reached target timers.target.
Jul 15 11:25:39.039113 systemd[1391]: Reached target basic.target.
Jul 15 11:25:39.039149 systemd[1391]: Reached target default.target.
Jul 15 11:25:39.039174 systemd[1391]: Startup finished in 59ms.
Jul 15 11:25:39.039262 systemd[1]: Started user@500.service.
Jul 15 11:25:39.040223 systemd[1]: Started session-1.scope.
Jul 15 11:25:39.088454 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:42194.service.
Jul 15 11:25:39.127754 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 42194 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:25:39.128583 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:25:39.131594 systemd-logind[1290]: New session 2 of user core.
Jul 15 11:25:39.132186 systemd[1]: Started session-2.scope.
Jul 15 11:25:39.183037 sshd[1400]: pam_unix(sshd:session): session closed for user core
Jul 15 11:25:39.185394 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:42210.service.
Jul 15 11:25:39.185806 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:42194.service: Deactivated successfully.
Jul 15 11:25:39.186532 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit.
Jul 15 11:25:39.186568 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 11:25:39.187302 systemd-logind[1290]: Removed session 2.
Jul 15 11:25:39.223860 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 42210 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:25:39.224839 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:25:39.228123 systemd-logind[1290]: New session 3 of user core.
Jul 15 11:25:39.229057 systemd[1]: Started session-3.scope.
Jul 15 11:25:39.278504 sshd[1406]: pam_unix(sshd:session): session closed for user core
Jul 15 11:25:39.280910 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:42224.service.
Jul 15 11:25:39.281278 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:42210.service: Deactivated successfully.
Jul 15 11:25:39.282260 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 11:25:39.282267 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit.
Jul 15 11:25:39.283042 systemd-logind[1290]: Removed session 3.
Jul 15 11:25:39.319189 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 42224 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:25:39.320109 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:25:39.323485 systemd-logind[1290]: New session 4 of user core.
Jul 15 11:25:39.324533 systemd[1]: Started session-4.scope.
Jul 15 11:25:39.376590 sshd[1413]: pam_unix(sshd:session): session closed for user core
Jul 15 11:25:39.378680 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:42238.service.
Jul 15 11:25:39.379033 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:42224.service: Deactivated successfully.
Jul 15 11:25:39.379802 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 11:25:39.379950 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit.
Jul 15 11:25:39.380654 systemd-logind[1290]: Removed session 4.
Jul 15 11:25:39.417512 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:25:39.418482 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:25:39.421125 systemd-logind[1290]: New session 5 of user core.
Jul 15 11:25:39.421738 systemd[1]: Started session-5.scope.
Jul 15 11:25:39.473970 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 11:25:39.474137 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:25:39.493806 systemd[1]: Starting docker.service...
Jul 15 11:25:39.519684 env[1437]: time="2025-07-15T11:25:39.519639632Z" level=info msg="Starting up"
Jul 15 11:25:39.521746 env[1437]: time="2025-07-15T11:25:39.521716647Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:25:39.521746 env[1437]: time="2025-07-15T11:25:39.521741263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:25:39.521830 env[1437]: time="2025-07-15T11:25:39.521764456Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:25:39.521830 env[1437]: time="2025-07-15T11:25:39.521773834Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:25:39.523182 env[1437]: time="2025-07-15T11:25:39.523154413Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:25:39.523182 env[1437]: time="2025-07-15T11:25:39.523170212Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:25:39.523182 env[1437]: time="2025-07-15T11:25:39.523180722Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:25:39.523273 env[1437]: time="2025-07-15T11:25:39.523187885Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:25:41.160779 env[1437]: time="2025-07-15T11:25:41.160737029Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 15 11:25:41.160779 env[1437]: time="2025-07-15T11:25:41.160763008Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 15 11:25:41.161326 env[1437]: time="2025-07-15T11:25:41.160900426Z" level=info msg="Loading containers: start."
Jul 15 11:25:41.861413 kernel: Initializing XFRM netlink socket
Jul 15 11:25:41.887118 env[1437]: time="2025-07-15T11:25:41.887066096Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 15 11:25:41.932625 systemd-networkd[1080]: docker0: Link UP
Jul 15 11:25:42.196708 env[1437]: time="2025-07-15T11:25:42.196605543Z" level=info msg="Loading containers: done."
Jul 15 11:25:42.395809 env[1437]: time="2025-07-15T11:25:42.395745791Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 11:25:42.395973 env[1437]: time="2025-07-15T11:25:42.395917633Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 15 11:25:42.396018 env[1437]: time="2025-07-15T11:25:42.396005948Z" level=info msg="Daemon has completed initialization"
Jul 15 11:25:42.839988 systemd[1]: Started docker.service.
Jul 15 11:25:42.846337 env[1437]: time="2025-07-15T11:25:42.846284160Z" level=info msg="API listen on /run/docker.sock"
Jul 15 11:25:43.472872 env[1303]: time="2025-07-15T11:25:43.472832638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 15 11:25:47.225797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734628169.mount: Deactivated successfully.
Jul 15 11:25:48.166559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 11:25:48.166736 systemd[1]: Stopped kubelet.service.
Jul 15 11:25:48.168198 systemd[1]: Starting kubelet.service...
Jul 15 11:25:48.248521 systemd[1]: Started kubelet.service.
Jul 15 11:25:48.576311 kubelet[1574]: E0715 11:25:48.576045 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:25:48.579005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:25:48.579151 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:25:53.147314 env[1303]: time="2025-07-15T11:25:53.147245420Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:53.150293 env[1303]: time="2025-07-15T11:25:53.150255033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:53.151993 env[1303]: time="2025-07-15T11:25:53.151947576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:53.153777 env[1303]: time="2025-07-15T11:25:53.153720981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:53.154597 env[1303]: time="2025-07-15T11:25:53.154564072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 15 11:25:53.155167 env[1303]: time="2025-07-15T11:25:53.155113031Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 15 11:25:55.095604 env[1303]: time="2025-07-15T11:25:55.095544570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:55.097361 env[1303]: time="2025-07-15T11:25:55.097304029Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:55.098877 env[1303]: time="2025-07-15T11:25:55.098846190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:55.100647 env[1303]: time="2025-07-15T11:25:55.100606912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:55.101241 env[1303]: time="2025-07-15T11:25:55.101209582Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 15 11:25:55.101716 env[1303]: time="2025-07-15T11:25:55.101680916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 15 11:25:56.775207 env[1303]: time="2025-07-15T11:25:56.775141434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:56.776965 env[1303]: time="2025-07-15T11:25:56.776925709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:56.778707 env[1303]: time="2025-07-15T11:25:56.778671593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:56.780129 env[1303]: time="2025-07-15T11:25:56.780092417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:56.780790 env[1303]: time="2025-07-15T11:25:56.780759688Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 15 11:25:56.781300 env[1303]: time="2025-07-15T11:25:56.781266628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 15 11:25:58.069558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941607621.mount: Deactivated successfully.
Jul 15 11:25:58.652911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 11:25:58.653080 systemd[1]: Stopped kubelet.service.
Jul 15 11:25:58.654329 systemd[1]: Starting kubelet.service...
Jul 15 11:25:58.866253 systemd[1]: Started kubelet.service.
Jul 15 11:25:58.901043 kubelet[1591]: E0715 11:25:58.901003 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:25:58.901663 env[1303]: time="2025-07-15T11:25:58.901616975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:58.902961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:25:58.903109 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:25:58.904068 env[1303]: time="2025-07-15T11:25:58.904029428Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:58.905480 env[1303]: time="2025-07-15T11:25:58.905421448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:58.906752 env[1303]: time="2025-07-15T11:25:58.906726385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:25:58.907130 env[1303]: time="2025-07-15T11:25:58.907104854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 15 11:25:58.907578 env[1303]: time="2025-07-15T11:25:58.907549488Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 11:25:59.438165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874470064.mount: Deactivated successfully.
Jul 15 11:26:00.331146 env[1303]: time="2025-07-15T11:26:00.331076681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:00.333203 env[1303]: time="2025-07-15T11:26:00.333157052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:00.335201 env[1303]: time="2025-07-15T11:26:00.335156060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:00.336947 env[1303]: time="2025-07-15T11:26:00.336921520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:00.339500 env[1303]: time="2025-07-15T11:26:00.339460962Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 15 11:26:00.340173 env[1303]: time="2025-07-15T11:26:00.340140776Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 11:26:00.848840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080473351.mount: Deactivated successfully.
Jul 15 11:26:00.854293 env[1303]: time="2025-07-15T11:26:00.854260200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:00.856133 env[1303]: time="2025-07-15T11:26:00.856085613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:00.857696 env[1303]: time="2025-07-15T11:26:00.857673540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:00.859360 env[1303]: time="2025-07-15T11:26:00.859334725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:00.859899 env[1303]: time="2025-07-15T11:26:00.859873866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 11:26:00.860363 env[1303]: time="2025-07-15T11:26:00.860341262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 11:26:01.350416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644427509.mount: Deactivated successfully. 
Jul 15 11:26:03.813172 env[1303]: time="2025-07-15T11:26:03.813108971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:03.814943 env[1303]: time="2025-07-15T11:26:03.814908505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:03.816652 env[1303]: time="2025-07-15T11:26:03.816617910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:03.818200 env[1303]: time="2025-07-15T11:26:03.818170732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:03.818871 env[1303]: time="2025-07-15T11:26:03.818840337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 11:26:05.646550 systemd[1]: Stopped kubelet.service. Jul 15 11:26:05.648407 systemd[1]: Starting kubelet.service... Jul 15 11:26:05.670006 systemd[1]: Reloading. 
Jul 15 11:26:05.728206 /usr/lib/systemd/system-generators/torcx-generator[1650]: time="2025-07-15T11:26:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:26:05.728544 /usr/lib/systemd/system-generators/torcx-generator[1650]: time="2025-07-15T11:26:05Z" level=info msg="torcx already run" Jul 15 11:26:05.905175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:26:05.905192 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:26:05.922201 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:26:05.994646 systemd[1]: Started kubelet.service. Jul 15 11:26:05.996131 systemd[1]: Stopping kubelet.service... Jul 15 11:26:05.996428 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:26:05.996640 systemd[1]: Stopped kubelet.service. Jul 15 11:26:05.997937 systemd[1]: Starting kubelet.service... Jul 15 11:26:06.078665 systemd[1]: Started kubelet.service. Jul 15 11:26:06.109644 kubelet[1710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:26:06.109644 kubelet[1710]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 15 11:26:06.109644 kubelet[1710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:26:06.110025 kubelet[1710]: I0715 11:26:06.109686 1710 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:26:06.330618 kubelet[1710]: I0715 11:26:06.330488 1710 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:26:06.330618 kubelet[1710]: I0715 11:26:06.330519 1710 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:26:06.330779 kubelet[1710]: I0715 11:26:06.330772 1710 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:26:06.349451 kubelet[1710]: E0715 11:26:06.349422 1710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:26:06.350178 kubelet[1710]: I0715 11:26:06.350151 1710 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:26:06.356182 kubelet[1710]: E0715 11:26:06.356134 1710 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:26:06.356182 kubelet[1710]: I0715 11:26:06.356160 1710 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 15 11:26:06.361224 kubelet[1710]: I0715 11:26:06.361199 1710 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 11:26:06.361937 kubelet[1710]: I0715 11:26:06.361909 1710 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:26:06.362068 kubelet[1710]: I0715 11:26:06.362034 1710 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:26:06.362256 kubelet[1710]: I0715 11:26:06.362061 1710 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 15 11:26:06.362382 kubelet[1710]: I0715 11:26:06.362262 1710 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:26:06.362382 kubelet[1710]: I0715 11:26:06.362273 1710 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:26:06.362461 kubelet[1710]: I0715 11:26:06.362387 1710 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:26:06.368104 kubelet[1710]: I0715 11:26:06.368074 1710 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:26:06.368104 kubelet[1710]: I0715 11:26:06.368097 1710 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:26:06.368198 kubelet[1710]: I0715 11:26:06.368134 1710 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:26:06.368198 kubelet[1710]: I0715 11:26:06.368152 1710 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:26:06.383183 kubelet[1710]: I0715 11:26:06.383150 1710 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:26:06.383543 kubelet[1710]: I0715 11:26:06.383520 1710 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:26:06.386706 kubelet[1710]: W0715 11:26:06.386676 1710 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 11:26:06.387511 kubelet[1710]: W0715 11:26:06.387471 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jul 15 11:26:06.387650 kubelet[1710]: E0715 11:26:06.387624 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:26:06.389616 kubelet[1710]: W0715 11:26:06.389551 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jul 15 11:26:06.389616 kubelet[1710]: E0715 11:26:06.389610 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:26:06.393258 kubelet[1710]: I0715 11:26:06.393228 1710 server.go:1274] "Started kubelet" Jul 15 11:26:06.393516 kubelet[1710]: I0715 11:26:06.393396 1710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:26:06.393695 kubelet[1710]: I0715 11:26:06.393663 1710 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:26:06.393759 kubelet[1710]: I0715 11:26:06.393707 1710 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 
Jul 15 11:26:06.394668 kubelet[1710]: I0715 11:26:06.394635 1710 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:26:06.396104 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 15 11:26:06.396399 kubelet[1710]: I0715 11:26:06.396214 1710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:26:06.396399 kubelet[1710]: I0715 11:26:06.396360 1710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:26:06.398488 kubelet[1710]: I0715 11:26:06.397860 1710 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:26:06.398488 kubelet[1710]: I0715 11:26:06.397934 1710 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:26:06.398488 kubelet[1710]: I0715 11:26:06.397985 1710 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:26:06.398488 kubelet[1710]: E0715 11:26:06.398155 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:26:06.398488 kubelet[1710]: E0715 11:26:06.398216 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Jul 15 11:26:06.398488 kubelet[1710]: W0715 11:26:06.398305 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jul 15 11:26:06.398488 kubelet[1710]: E0715 11:26:06.398345 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:26:06.398748 kubelet[1710]: I0715 11:26:06.398504 1710 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:26:06.398748 kubelet[1710]: I0715 11:26:06.398561 1710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:26:06.399609 kubelet[1710]: E0715 11:26:06.399528 1710 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:26:06.399744 kubelet[1710]: I0715 11:26:06.399722 1710 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:26:06.403840 kubelet[1710]: E0715 11:26:06.402943 1710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852691a3bc25310 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:26:06.393201424 +0000 UTC m=+0.311094994,LastTimestamp:2025-07-15 11:26:06.393201424 +0000 UTC m=+0.311094994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:26:06.412924 kubelet[1710]: I0715 11:26:06.412876 1710 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 15 11:26:06.414212 kubelet[1710]: I0715 11:26:06.414186 1710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 11:26:06.414259 kubelet[1710]: I0715 11:26:06.414226 1710 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:26:06.414259 kubelet[1710]: I0715 11:26:06.414245 1710 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:26:06.414403 kubelet[1710]: E0715 11:26:06.414381 1710 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:26:06.414917 kubelet[1710]: W0715 11:26:06.414730 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jul 15 11:26:06.414917 kubelet[1710]: E0715 11:26:06.414761 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:26:06.417977 kubelet[1710]: I0715 11:26:06.417957 1710 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:26:06.417977 kubelet[1710]: I0715 11:26:06.417969 1710 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:26:06.418056 kubelet[1710]: I0715 11:26:06.417982 1710 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:26:06.498240 kubelet[1710]: E0715 11:26:06.498205 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:26:06.515503 kubelet[1710]: E0715 11:26:06.515466 1710 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Jul 15 11:26:06.598399 kubelet[1710]: E0715 11:26:06.598324 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:26:06.598547 kubelet[1710]: E0715 11:26:06.598519 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Jul 15 11:26:06.661521 kubelet[1710]: I0715 11:26:06.661506 1710 policy_none.go:49] "None policy: Start" Jul 15 11:26:06.662002 kubelet[1710]: I0715 11:26:06.661985 1710 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:26:06.662050 kubelet[1710]: I0715 11:26:06.662016 1710 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:26:06.667254 kubelet[1710]: I0715 11:26:06.667233 1710 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:26:06.667359 kubelet[1710]: I0715 11:26:06.667345 1710 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:26:06.667400 kubelet[1710]: I0715 11:26:06.667358 1710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:26:06.667565 kubelet[1710]: I0715 11:26:06.667553 1710 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:26:06.668458 kubelet[1710]: E0715 11:26:06.668437 1710 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:26:06.768797 kubelet[1710]: I0715 11:26:06.768769 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:26:06.769120 kubelet[1710]: E0715 11:26:06.769094 1710 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jul 15 11:26:06.799390 kubelet[1710]: I0715 11:26:06.799342 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:26:06.799390 kubelet[1710]: I0715 11:26:06.799369 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:26:06.799494 kubelet[1710]: I0715 11:26:06.799399 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:26:06.799494 kubelet[1710]: I0715 11:26:06.799417 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a32fd6348d7a427e63bef86865d556f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a32fd6348d7a427e63bef86865d556f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:26:06.799494 kubelet[1710]: I0715 11:26:06.799431 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:26:06.799494 kubelet[1710]: I0715 11:26:06.799446 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:26:06.799494 kubelet[1710]: I0715 11:26:06.799459 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a32fd6348d7a427e63bef86865d556f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a32fd6348d7a427e63bef86865d556f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:26:06.799629 kubelet[1710]: I0715 11:26:06.799471 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:26:06.799629 kubelet[1710]: I0715 11:26:06.799483 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a32fd6348d7a427e63bef86865d556f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a32fd6348d7a427e63bef86865d556f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:26:06.970521 kubelet[1710]: I0715 11:26:06.970482 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:26:06.970826 kubelet[1710]: E0715 11:26:06.970796 1710 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jul 15 11:26:06.999227 kubelet[1710]: E0715 11:26:06.999180 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Jul 15 11:26:07.022627 kubelet[1710]: E0715 11:26:07.022565 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:07.022627 kubelet[1710]: E0715 11:26:07.022609 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:07.022997 kubelet[1710]: E0715 11:26:07.022961 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:07.023869 env[1303]: time="2025-07-15T11:26:07.023509892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 11:26:07.023869 env[1303]: time="2025-07-15T11:26:07.023566418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a32fd6348d7a427e63bef86865d556f,Namespace:kube-system,Attempt:0,}" Jul 15 11:26:07.023869 env[1303]: time="2025-07-15T11:26:07.023543936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 11:26:07.372721 kubelet[1710]: I0715 11:26:07.372609 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 
11:26:07.373207 kubelet[1710]: E0715 11:26:07.373149 1710 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jul 15 11:26:07.482483 kubelet[1710]: W0715 11:26:07.482410 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jul 15 11:26:07.482483 kubelet[1710]: E0715 11:26:07.482474 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:26:07.509336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433431721.mount: Deactivated successfully. 
Jul 15 11:26:07.513592 env[1303]: time="2025-07-15T11:26:07.513545749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.517204 env[1303]: time="2025-07-15T11:26:07.517175565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.518750 env[1303]: time="2025-07-15T11:26:07.518729378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.519837 env[1303]: time="2025-07-15T11:26:07.519810585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.520644 env[1303]: time="2025-07-15T11:26:07.520618570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.521747 env[1303]: time="2025-07-15T11:26:07.521723652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.523908 env[1303]: time="2025-07-15T11:26:07.523885025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.524806 env[1303]: time="2025-07-15T11:26:07.524784672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.525943 env[1303]: time="2025-07-15T11:26:07.525923657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.527091 env[1303]: time="2025-07-15T11:26:07.527064446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.528799 env[1303]: time="2025-07-15T11:26:07.528759455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.534385 env[1303]: time="2025-07-15T11:26:07.534346079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:26:07.554059 env[1303]: time="2025-07-15T11:26:07.554010049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:26:07.554059 env[1303]: time="2025-07-15T11:26:07.554041799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:26:07.554059 env[1303]: time="2025-07-15T11:26:07.554051347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:26:07.554249 env[1303]: time="2025-07-15T11:26:07.554196198Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d657e8199172db889e9770d8f6baa858feb3a5da344ca3b98d63ddfd1224f28 pid=1752 runtime=io.containerd.runc.v2
Jul 15 11:26:07.568843 env[1303]: time="2025-07-15T11:26:07.568665979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:26:07.568843 env[1303]: time="2025-07-15T11:26:07.568700875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:26:07.568843 env[1303]: time="2025-07-15T11:26:07.568710042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:26:07.568843 env[1303]: time="2025-07-15T11:26:07.568808036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eebfede0e02e2ad6dcbe3a290fb2f3f59ecf55451b3ab0c735e95134fefb402e pid=1778 runtime=io.containerd.runc.v2
Jul 15 11:26:07.580201 env[1303]: time="2025-07-15T11:26:07.579289849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:26:07.580201 env[1303]: time="2025-07-15T11:26:07.579364739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:26:07.580201 env[1303]: time="2025-07-15T11:26:07.579412188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:26:07.580201 env[1303]: time="2025-07-15T11:26:07.579651317Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73a674cbecf9a2ba8337d0ba4d9d7cd76436db0ee7e95b5cabce9787b9be4ddb pid=1809 runtime=io.containerd.runc.v2
Jul 15 11:26:07.585411 kubelet[1710]: W0715 11:26:07.584762 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Jul 15 11:26:07.585411 kubelet[1710]: E0715 11:26:07.584849 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:26:07.609498 env[1303]: time="2025-07-15T11:26:07.609455059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a32fd6348d7a427e63bef86865d556f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d657e8199172db889e9770d8f6baa858feb3a5da344ca3b98d63ddfd1224f28\""
Jul 15 11:26:07.617459 kubelet[1710]: E0715 11:26:07.617430 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:07.619206 env[1303]: time="2025-07-15T11:26:07.619177759Z" level=info msg="CreateContainer within sandbox \"1d657e8199172db889e9770d8f6baa858feb3a5da344ca3b98d63ddfd1224f28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 11:26:07.624866 env[1303]: time="2025-07-15T11:26:07.624354716Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"eebfede0e02e2ad6dcbe3a290fb2f3f59ecf55451b3ab0c735e95134fefb402e\""
Jul 15 11:26:07.625984 kubelet[1710]: E0715 11:26:07.625960 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:07.627061 env[1303]: time="2025-07-15T11:26:07.627033989Z" level=info msg="CreateContainer within sandbox \"eebfede0e02e2ad6dcbe3a290fb2f3f59ecf55451b3ab0c735e95134fefb402e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 11:26:07.634922 env[1303]: time="2025-07-15T11:26:07.634882345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"73a674cbecf9a2ba8337d0ba4d9d7cd76436db0ee7e95b5cabce9787b9be4ddb\""
Jul 15 11:26:07.635463 kubelet[1710]: E0715 11:26:07.635446 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:07.636524 env[1303]: time="2025-07-15T11:26:07.636495910Z" level=info msg="CreateContainer within sandbox \"73a674cbecf9a2ba8337d0ba4d9d7cd76436db0ee7e95b5cabce9787b9be4ddb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 11:26:07.639927 env[1303]: time="2025-07-15T11:26:07.639892569Z" level=info msg="CreateContainer within sandbox \"1d657e8199172db889e9770d8f6baa858feb3a5da344ca3b98d63ddfd1224f28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb1a3f9fe84b3d350f8ea6209ccdea0cb6ebc721ab271e1c39f04e0b14fa823a\""
Jul 15 11:26:07.640398 env[1303]: time="2025-07-15T11:26:07.640369443Z" level=info msg="StartContainer for \"bb1a3f9fe84b3d350f8ea6209ccdea0cb6ebc721ab271e1c39f04e0b14fa823a\""
Jul 15 11:26:07.644996 env[1303]: time="2025-07-15T11:26:07.644968997Z" level=info msg="CreateContainer within sandbox \"eebfede0e02e2ad6dcbe3a290fb2f3f59ecf55451b3ab0c735e95134fefb402e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e47dc935cf9e00aeba1677eae094b40cbc62185e13cab615d08f504358c363f2\""
Jul 15 11:26:07.645408 env[1303]: time="2025-07-15T11:26:07.645389676Z" level=info msg="StartContainer for \"e47dc935cf9e00aeba1677eae094b40cbc62185e13cab615d08f504358c363f2\""
Jul 15 11:26:07.654626 env[1303]: time="2025-07-15T11:26:07.654589826Z" level=info msg="CreateContainer within sandbox \"73a674cbecf9a2ba8337d0ba4d9d7cd76436db0ee7e95b5cabce9787b9be4ddb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd51c5357c417274a47539501e679bd021a77723ef6df6900681920055091b99\""
Jul 15 11:26:07.654964 env[1303]: time="2025-07-15T11:26:07.654945824Z" level=info msg="StartContainer for \"cd51c5357c417274a47539501e679bd021a77723ef6df6900681920055091b99\""
Jul 15 11:26:07.696080 env[1303]: time="2025-07-15T11:26:07.696040677Z" level=info msg="StartContainer for \"e47dc935cf9e00aeba1677eae094b40cbc62185e13cab615d08f504358c363f2\" returns successfully"
Jul 15 11:26:07.699286 env[1303]: time="2025-07-15T11:26:07.699255975Z" level=info msg="StartContainer for \"bb1a3f9fe84b3d350f8ea6209ccdea0cb6ebc721ab271e1c39f04e0b14fa823a\" returns successfully"
Jul 15 11:26:07.727569 env[1303]: time="2025-07-15T11:26:07.726488937Z" level=info msg="StartContainer for \"cd51c5357c417274a47539501e679bd021a77723ef6df6900681920055091b99\" returns successfully"
Jul 15 11:26:08.174987 kubelet[1710]: I0715 11:26:08.174959 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 11:26:08.419828 kubelet[1710]: E0715 11:26:08.419804 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:08.422058 kubelet[1710]: E0715 11:26:08.422045 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:08.423886 kubelet[1710]: E0715 11:26:08.423875 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:08.645922 kubelet[1710]: E0715 11:26:08.645811 1710 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 15 11:26:08.736446 kubelet[1710]: I0715 11:26:08.736402 1710 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 15 11:26:08.736446 kubelet[1710]: E0715 11:26:08.736440 1710 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 15 11:26:09.384513 kubelet[1710]: I0715 11:26:09.384485 1710 apiserver.go:52] "Watching apiserver"
Jul 15 11:26:09.398162 kubelet[1710]: I0715 11:26:09.398120 1710 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 11:26:09.428672 kubelet[1710]: E0715 11:26:09.428637 1710 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 15 11:26:09.429027 kubelet[1710]: E0715 11:26:09.428771 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:10.198005 kubelet[1710]: E0715 11:26:10.197960 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:10.425662 kubelet[1710]: E0715 11:26:10.425632 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:11.045609 systemd[1]: Reloading.
Jul 15 11:26:11.105345 /usr/lib/systemd/system-generators/torcx-generator[2002]: time="2025-07-15T11:26:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 15 11:26:11.105386 /usr/lib/systemd/system-generators/torcx-generator[2002]: time="2025-07-15T11:26:11Z" level=info msg="torcx already run"
Jul 15 11:26:11.166902 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:26:11.166922 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:26:11.187438 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:26:11.267645 systemd[1]: Stopping kubelet.service...
Jul 15 11:26:11.292850 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 11:26:11.293131 systemd[1]: Stopped kubelet.service.
Jul 15 11:26:11.294774 systemd[1]: Starting kubelet.service...
Jul 15 11:26:11.377522 systemd[1]: Started kubelet.service.
Jul 15 11:26:11.409476 kubelet[2059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:26:11.409476 kubelet[2059]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 11:26:11.409476 kubelet[2059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:26:11.410270 kubelet[2059]: I0715 11:26:11.409514 2059 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 11:26:11.417555 kubelet[2059]: I0715 11:26:11.417508 2059 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 11:26:11.417555 kubelet[2059]: I0715 11:26:11.417532 2059 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 11:26:11.418247 kubelet[2059]: I0715 11:26:11.418228 2059 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 11:26:11.421601 kubelet[2059]: I0715 11:26:11.421570 2059 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 15 11:26:11.423286 kubelet[2059]: I0715 11:26:11.423257 2059 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 11:26:11.426346 kubelet[2059]: E0715 11:26:11.426293 2059 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 11:26:11.426495 kubelet[2059]: I0715 11:26:11.426481 2059 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 11:26:11.429806 kubelet[2059]: I0715 11:26:11.429789 2059 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 11:26:11.430186 kubelet[2059]: I0715 11:26:11.430174 2059 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 11:26:11.430366 kubelet[2059]: I0715 11:26:11.430338 2059 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 11:26:11.430621 kubelet[2059]: I0715 11:26:11.430456 2059 container_manager_linux.go:269] "Creating Container Manager object based on Node Config"
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 15 11:26:11.430756 kubelet[2059]: I0715 11:26:11.430741 2059 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 11:26:11.430824 kubelet[2059]: I0715 11:26:11.430811 2059 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 11:26:11.430908 kubelet[2059]: I0715 11:26:11.430895 2059 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:26:11.431056 kubelet[2059]: I0715 11:26:11.431043 2059 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 11:26:11.431143 kubelet[2059]: I0715 11:26:11.431127 2059 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 11:26:11.431262 kubelet[2059]: I0715 11:26:11.431248 2059 kubelet.go:314] "Adding apiserver pod source"
Jul 15 11:26:11.431358 kubelet[2059]: I0715 11:26:11.431343 2059 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 11:26:11.433675 kubelet[2059]: I0715 11:26:11.433636 2059 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 15 11:26:11.433987 kubelet[2059]: I0715 11:26:11.433960 2059 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 11:26:11.434320 kubelet[2059]: I0715 11:26:11.434293 2059 server.go:1274] "Started kubelet"
Jul 15 11:26:11.438044 kubelet[2059]: I0715 11:26:11.438008 2059 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 11:26:11.441921 kubelet[2059]: I0715 11:26:11.441841 2059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 11:26:11.442533 kubelet[2059]: I0715 11:26:11.442507 2059 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 11:26:11.442732 kubelet[2059]: I0715 11:26:11.442714 2059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 11:26:11.443299 kubelet[2059]: I0715 11:26:11.443250 2059 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 11:26:11.443559 kubelet[2059]: I0715 11:26:11.443536 2059 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 11:26:11.445691 kubelet[2059]: E0715 11:26:11.445668 2059 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 11:26:11.446743 kubelet[2059]: I0715 11:26:11.445893 2059 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 11:26:11.446975 kubelet[2059]: I0715 11:26:11.446861 2059 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 11:26:11.447097 kubelet[2059]: I0715 11:26:11.446177 2059 factory.go:221] Registration of the systemd container factory successfully
Jul 15 11:26:11.447324 kubelet[2059]: I0715 11:26:11.447195 2059 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 11:26:11.447499 kubelet[2059]: I0715 11:26:11.447487 2059 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 11:26:11.448549 kubelet[2059]: I0715 11:26:11.448342 2059 factory.go:221] Registration of the containerd container factory successfully
Jul 15 11:26:11.456006 kubelet[2059]: I0715 11:26:11.455966 2059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 11:26:11.457364 kubelet[2059]: I0715 11:26:11.457316 2059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 11:26:11.457364 kubelet[2059]: I0715 11:26:11.457342 2059 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 11:26:11.457457 kubelet[2059]: I0715 11:26:11.457416 2059 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 11:26:11.457489 kubelet[2059]: E0715 11:26:11.457460 2059 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 11:26:11.486924 kubelet[2059]: I0715 11:26:11.486900 2059 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 11:26:11.487073 kubelet[2059]: I0715 11:26:11.487057 2059 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 11:26:11.487146 kubelet[2059]: I0715 11:26:11.487134 2059 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:26:11.487337 kubelet[2059]: I0715 11:26:11.487323 2059 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 11:26:11.487436 kubelet[2059]: I0715 11:26:11.487407 2059 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 11:26:11.487503 kubelet[2059]: I0715 11:26:11.487490 2059 policy_none.go:49] "None policy: Start"
Jul 15 11:26:11.488026 kubelet[2059]: I0715 11:26:11.488017 2059 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 11:26:11.488114 kubelet[2059]: I0715 11:26:11.488102 2059 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 11:26:11.488292 kubelet[2059]: I0715 11:26:11.488281 2059 state_mem.go:75] "Updated machine memory state"
Jul 15 11:26:11.489252 kubelet[2059]: I0715 11:26:11.489239 2059 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 11:26:11.489454 kubelet[2059]: I0715 11:26:11.489443 2059 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 11:26:11.489540 kubelet[2059]: I0715 11:26:11.489513 2059 container_log_manager.go:189]
"Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 11:26:11.490647 kubelet[2059]: I0715 11:26:11.490635 2059 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 11:26:11.563837 kubelet[2059]: E0715 11:26:11.563801 2059 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 15 11:26:11.595699 kubelet[2059]: I0715 11:26:11.595685 2059 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 11:26:11.600566 kubelet[2059]: I0715 11:26:11.600548 2059 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 15 11:26:11.600635 kubelet[2059]: I0715 11:26:11.600609 2059 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 15 11:26:11.649239 kubelet[2059]: I0715 11:26:11.649125 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:26:11.649239 kubelet[2059]: I0715 11:26:11.649165 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a32fd6348d7a427e63bef86865d556f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a32fd6348d7a427e63bef86865d556f\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:26:11.649239 kubelet[2059]: I0715 11:26:11.649186 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:26:11.649239 kubelet[2059]: I0715 11:26:11.649200 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:26:11.649239 kubelet[2059]: I0715 11:26:11.649215 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:26:11.649558 kubelet[2059]: I0715 11:26:11.649517 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:26:11.649732 kubelet[2059]: I0715 11:26:11.649589 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 11:26:11.649732 kubelet[2059]: I0715 11:26:11.649635 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a32fd6348d7a427e63bef86865d556f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a32fd6348d7a427e63bef86865d556f\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:26:11.649732 kubelet[2059]: I0715 11:26:11.649653 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a32fd6348d7a427e63bef86865d556f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a32fd6348d7a427e63bef86865d556f\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:26:11.863458 kubelet[2059]: E0715 11:26:11.863418 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:11.864436 kubelet[2059]: E0715 11:26:11.864418 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:11.864674 kubelet[2059]: E0715 11:26:11.864637 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:12.009787 sudo[2095]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 15 11:26:12.009976 sudo[2095]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 15 11:26:12.432541 kubelet[2059]: I0715 11:26:12.432505 2059 apiserver.go:52] "Watching apiserver"
Jul 15 11:26:12.447957 kubelet[2059]: I0715 11:26:12.447926 2059 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 11:26:12.464524 sudo[2095]: pam_unix(sudo:session): session closed for user root
Jul 15 11:26:12.466848 kubelet[2059]: E0715 11:26:12.466811 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:12.467747 kubelet[2059]:
E0715 11:26:12.467629 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:12.473433 kubelet[2059]: E0715 11:26:12.473289 2059 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 15 11:26:12.473529 kubelet[2059]: E0715 11:26:12.473468 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:12.495955 kubelet[2059]: I0715 11:26:12.495727 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.495710765 podStartE2EDuration="2.495710765s" podCreationTimestamp="2025-07-15 11:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:26:12.487292146 +0000 UTC m=+1.106580826" watchObservedRunningTime="2025-07-15 11:26:12.495710765 +0000 UTC m=+1.114999436"
Jul 15 11:26:12.513590 kubelet[2059]: I0715 11:26:12.513526 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.513503618 podStartE2EDuration="1.513503618s" podCreationTimestamp="2025-07-15 11:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:26:12.495937611 +0000 UTC m=+1.115226291" watchObservedRunningTime="2025-07-15 11:26:12.513503618 +0000 UTC m=+1.132792298"
Jul 15 11:26:13.468196 kubelet[2059]: E0715 11:26:13.468158 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:14.307393 sudo[1425]: pam_unix(sudo:session): session closed for user root
Jul 15 11:26:14.308552 sshd[1420]: pam_unix(sshd:session): session closed for user core
Jul 15 11:26:14.310480 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:42238.service: Deactivated successfully.
Jul 15 11:26:14.311396 systemd-logind[1290]: Session 5 logged out. Waiting for processes to exit.
Jul 15 11:26:14.311424 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 11:26:14.312063 systemd-logind[1290]: Removed session 5.
Jul 15 11:26:14.605512 kubelet[2059]: E0715 11:26:14.605403 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:16.903984 kubelet[2059]: I0715 11:26:16.903951 2059 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 11:26:16.904348 env[1303]: time="2025-07-15T11:26:16.904313078Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 11:26:16.904562 kubelet[2059]: I0715 11:26:16.904503 2059 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 11:26:17.783212 kubelet[2059]: I0715 11:26:17.783157 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.783140299 podStartE2EDuration="6.783140299s" podCreationTimestamp="2025-07-15 11:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:26:12.513428604 +0000 UTC m=+1.132717274" watchObservedRunningTime="2025-07-15 11:26:17.783140299 +0000 UTC m=+6.402428980"
Jul 15 11:26:17.889163 kubelet[2059]: I0715 11:26:17.889106 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c53c450e-6189-4726-9fc9-b76c1e0aa067-lib-modules\") pod \"kube-proxy-n9vjx\" (UID: \"c53c450e-6189-4726-9fc9-b76c1e0aa067\") " pod="kube-system/kube-proxy-n9vjx"
Jul 15 11:26:17.889163 kubelet[2059]: I0715 11:26:17.889143 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-cgroup\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889163 kubelet[2059]: I0715 11:26:17.889157 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-lib-modules\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889163 kubelet[2059]: I0715 11:26:17.889170 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0af08191-10ec-44c4-a087-f6925b4b6bf9-clustermesh-secrets\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889449 kubelet[2059]: I0715 11:26:17.889183 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-hostproc\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889449 kubelet[2059]: I0715 11:26:17.889200 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-etc-cni-netd\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889449 kubelet[2059]: I0715 11:26:17.889256 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c53c450e-6189-4726-9fc9-b76c1e0aa067-kube-proxy\") pod \"kube-proxy-n9vjx\" (UID: \"c53c450e-6189-4726-9fc9-b76c1e0aa067\") " pod="kube-system/kube-proxy-n9vjx"
Jul 15 11:26:17.889449 kubelet[2059]: I0715 11:26:17.889289 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-config-path\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889449 kubelet[2059]: I0715 11:26:17.889308 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cni-path\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889449 kubelet[2059]: I0715 11:26:17.889324 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-net\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889602 kubelet[2059]: I0715 11:26:17.889341 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-kernel\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889602 kubelet[2059]: I0715 11:26:17.889357 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-hubble-tls\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889602 kubelet[2059]: I0715 11:26:17.889391 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c53c450e-6189-4726-9fc9-b76c1e0aa067-xtables-lock\") pod \"kube-proxy-n9vjx\" (UID: \"c53c450e-6189-4726-9fc9-b76c1e0aa067\") " pod="kube-system/kube-proxy-n9vjx"
Jul 15 11:26:17.889602 kubelet[2059]: I0715 11:26:17.889408 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-run\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889602 kubelet[2059]: I0715 11:26:17.889424 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-xtables-lock\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889730 kubelet[2059]: I0715 11:26:17.889466 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbzk\" (UniqueName: \"kubernetes.io/projected/c53c450e-6189-4726-9fc9-b76c1e0aa067-kube-api-access-wxbzk\") pod \"kube-proxy-n9vjx\" (UID: \"c53c450e-6189-4726-9fc9-b76c1e0aa067\") " pod="kube-system/kube-proxy-n9vjx"
Jul 15 11:26:17.889730 kubelet[2059]: I0715 11:26:17.889483 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-bpf-maps\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.889730 kubelet[2059]: I0715 11:26:17.889508 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtr49\" (UniqueName: \"kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-kube-api-access-gtr49\") pod \"cilium-jdvzn\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") " pod="kube-system/cilium-jdvzn"
Jul 15 11:26:17.990910 kubelet[2059]: I0715 11:26:17.990861 2059 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jul 15 11:26:18.086032 kubelet[2059]: E0715 11:26:18.085901 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.086524 env[1303]: time="2025-07-15T11:26:18.086480311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9vjx,Uid:c53c450e-6189-4726-9fc9-b76c1e0aa067,Namespace:kube-system,Attempt:0,}"
Jul 15 11:26:18.089990 kubelet[2059]: E0715 11:26:18.089969 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.090348 env[1303]: time="2025-07-15T11:26:18.090202390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdvzn,Uid:0af08191-10ec-44c4-a087-f6925b4b6bf9,Namespace:kube-system,Attempt:0,}"
Jul 15 11:26:18.091691 kubelet[2059]: I0715 11:26:18.091662 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a07036d-f6a5-42d4-b607-0aa4500de691-cilium-config-path\") pod \"cilium-operator-5d85765b45-7tsm5\" (UID: \"8a07036d-f6a5-42d4-b607-0aa4500de691\") " pod="kube-system/cilium-operator-5d85765b45-7tsm5"
Jul 15 11:26:18.091761 kubelet[2059]: I0715 11:26:18.091696 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrtf\" (UniqueName: \"kubernetes.io/projected/8a07036d-f6a5-42d4-b607-0aa4500de691-kube-api-access-dfrtf\") pod \"cilium-operator-5d85765b45-7tsm5\" (UID: \"8a07036d-f6a5-42d4-b607-0aa4500de691\") " pod="kube-system/cilium-operator-5d85765b45-7tsm5"
Jul 15 11:26:18.101876 env[1303]: time="2025-07-15T11:26:18.101820906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:26:18.101946 env[1303]: time="2025-07-15T11:26:18.101895178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:26:18.101946 env[1303]: time="2025-07-15T11:26:18.101917991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:26:18.102134 env[1303]: time="2025-07-15T11:26:18.102040104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43233273031dc501543ab8b1caf19c7a706f010f33caec311aa9d2652cd03eda pid=2155 runtime=io.containerd.runc.v2
Jul 15 11:26:18.108826 env[1303]: time="2025-07-15T11:26:18.108763278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:26:18.108826 env[1303]: time="2025-07-15T11:26:18.108796922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:26:18.108826 env[1303]: time="2025-07-15T11:26:18.108806059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:26:18.109018 env[1303]: time="2025-07-15T11:26:18.108908784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2 pid=2176 runtime=io.containerd.runc.v2
Jul 15 11:26:18.141984 env[1303]: time="2025-07-15T11:26:18.141789753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdvzn,Uid:0af08191-10ec-44c4-a087-f6925b4b6bf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\""
Jul 15 11:26:18.142400 kubelet[2059]: E0715 11:26:18.142360 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.144055 env[1303]: time="2025-07-15T11:26:18.144022726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9vjx,Uid:c53c450e-6189-4726-9fc9-b76c1e0aa067,Namespace:kube-system,Attempt:0,} returns sandbox id \"43233273031dc501543ab8b1caf19c7a706f010f33caec311aa9d2652cd03eda\""
Jul 15 11:26:18.144386 kubelet[2059]: E0715 11:26:18.144362 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.210030 env[1303]: time="2025-07-15T11:26:18.209987790Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 15 11:26:18.210860 env[1303]: time="2025-07-15T11:26:18.210839373Z" level=info msg="CreateContainer within sandbox \"43233273031dc501543ab8b1caf19c7a706f010f33caec311aa9d2652cd03eda\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 11:26:18.259601 env[1303]: time="2025-07-15T11:26:18.259549446Z" level=info msg="CreateContainer within sandbox \"43233273031dc501543ab8b1caf19c7a706f010f33caec311aa9d2652cd03eda\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"df9fb0fb623685bd854765b056a3e49051ddf33f51fc78059bf98f4768565674\""
Jul 15 11:26:18.260071 env[1303]: time="2025-07-15T11:26:18.260020133Z" level=info msg="StartContainer for \"df9fb0fb623685bd854765b056a3e49051ddf33f51fc78059bf98f4768565674\""
Jul 15 11:26:18.299896 env[1303]: time="2025-07-15T11:26:18.299858191Z" level=info msg="StartContainer for \"df9fb0fb623685bd854765b056a3e49051ddf33f51fc78059bf98f4768565674\" returns successfully"
Jul 15 11:26:18.333528 kubelet[2059]: E0715 11:26:18.333500 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.334023 env[1303]: time="2025-07-15T11:26:18.333993749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7tsm5,Uid:8a07036d-f6a5-42d4-b607-0aa4500de691,Namespace:kube-system,Attempt:0,}"
Jul 15 11:26:18.348331 env[1303]: time="2025-07-15T11:26:18.348220382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:26:18.348483 env[1303]: time="2025-07-15T11:26:18.348258836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:26:18.348483 env[1303]: time="2025-07-15T11:26:18.348268524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:26:18.348631 env[1303]: time="2025-07-15T11:26:18.348605516Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6 pid=2279 runtime=io.containerd.runc.v2
Jul 15 11:26:18.390799 env[1303]: time="2025-07-15T11:26:18.390762591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7tsm5,Uid:8a07036d-f6a5-42d4-b607-0aa4500de691,Namespace:kube-system,Attempt:0,} returns sandbox id \"c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6\""
Jul 15 11:26:18.391818 kubelet[2059]: E0715 11:26:18.391448 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.477768 kubelet[2059]: E0715 11:26:18.477723 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:18.486188 kubelet[2059]: I0715 11:26:18.486145 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n9vjx" podStartSLOduration=1.486128126 podStartE2EDuration="1.486128126s" podCreationTimestamp="2025-07-15 11:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:26:18.486104601 +0000 UTC m=+7.105393271" watchObservedRunningTime="2025-07-15 11:26:18.486128126 +0000 UTC m=+7.105416796"
Jul 15 11:26:20.993611 kubelet[2059]: E0715 11:26:20.993561 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:21.304120 kubelet[2059]: E0715 11:26:21.303895 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:21.345492 update_engine[1292]: I0715 11:26:21.345439 1292 update_attempter.cc:509] Updating boot flags...
Jul 15 11:26:21.490693 kubelet[2059]: E0715 11:26:21.489964 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:21.490693 kubelet[2059]: E0715 11:26:21.490020 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:23.528903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072320366.mount: Deactivated successfully.
Jul 15 11:26:24.609186 kubelet[2059]: E0715 11:26:24.609157 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:27.234620 env[1303]: time="2025-07-15T11:26:27.234551700Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:27.237013 env[1303]: time="2025-07-15T11:26:27.236962730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:27.239005 env[1303]: time="2025-07-15T11:26:27.238969246Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:27.239441 env[1303]: time="2025-07-15T11:26:27.239411872Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 15 11:26:27.242702 env[1303]: time="2025-07-15T11:26:27.242659314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 15 11:26:27.248279 env[1303]: time="2025-07-15T11:26:27.248238437Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 11:26:27.259130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796998807.mount: Deactivated successfully.
Jul 15 11:26:27.260623 env[1303]: time="2025-07-15T11:26:27.260585561Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\""
Jul 15 11:26:27.261005 env[1303]: time="2025-07-15T11:26:27.260974497Z" level=info msg="StartContainer for \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\""
Jul 15 11:26:27.348449 env[1303]: time="2025-07-15T11:26:27.348404858Z" level=info msg="StartContainer for \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\" returns successfully"
Jul 15 11:26:27.606271 kubelet[2059]: E0715 11:26:27.605842 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:27.607109 env[1303]: time="2025-07-15T11:26:27.607047398Z" level=info msg="shim disconnected" id=198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa
Jul 15 11:26:27.607109 env[1303]: time="2025-07-15T11:26:27.607088786Z" level=warning msg="cleaning up after shim disconnected" id=198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa namespace=k8s.io
Jul 15 11:26:27.607109 env[1303]: time="2025-07-15T11:26:27.607097683Z" level=info msg="cleaning up dead shim"
Jul 15 11:26:27.616293 env[1303]: time="2025-07-15T11:26:27.616239344Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:26:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2498 runtime=io.containerd.runc.v2\n"
Jul 15 11:26:28.257192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa-rootfs.mount: Deactivated successfully.
Jul 15 11:26:28.611174 kubelet[2059]: E0715 11:26:28.611054 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:28.613252 env[1303]: time="2025-07-15T11:26:28.613205411Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 11:26:30.301809 env[1303]: time="2025-07-15T11:26:30.301750023Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\""
Jul 15 11:26:30.302738 env[1303]: time="2025-07-15T11:26:30.302146963Z" level=info msg="StartContainer for \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\""
Jul 15 11:26:30.408124 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 11:26:30.408366 systemd[1]: Stopped systemd-sysctl.service.
Jul 15 11:26:30.408530 systemd[1]: Stopping systemd-sysctl.service...
Jul 15 11:26:30.409963 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:26:30.412199 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 11:26:30.420014 systemd[1]: Finished systemd-sysctl.service.
Jul 15 11:26:30.884499 env[1303]: time="2025-07-15T11:26:30.884398628Z" level=info msg="StartContainer for \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\" returns successfully"
Jul 15 11:26:30.897824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015-rootfs.mount: Deactivated successfully.
Jul 15 11:26:30.918417 env[1303]: time="2025-07-15T11:26:30.918359847Z" level=info msg="shim disconnected" id=0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015
Jul 15 11:26:30.918664 env[1303]: time="2025-07-15T11:26:30.918617132Z" level=warning msg="cleaning up after shim disconnected" id=0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015 namespace=k8s.io
Jul 15 11:26:30.918664 env[1303]: time="2025-07-15T11:26:30.918637440Z" level=info msg="cleaning up dead shim"
Jul 15 11:26:30.925013 env[1303]: time="2025-07-15T11:26:30.924986381Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:26:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2563 runtime=io.containerd.runc.v2\n"
Jul 15 11:26:31.520599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658596693.mount: Deactivated successfully.
Jul 15 11:26:31.891016 kubelet[2059]: E0715 11:26:31.890780 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:31.892810 env[1303]: time="2025-07-15T11:26:31.892761534Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 11:26:31.915691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156214816.mount: Deactivated successfully.
Jul 15 11:26:31.942495 env[1303]: time="2025-07-15T11:26:31.942431928Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\""
Jul 15 11:26:31.943273 env[1303]: time="2025-07-15T11:26:31.943216900Z" level=info msg="StartContainer for \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\""
Jul 15 11:26:32.003878 env[1303]: time="2025-07-15T11:26:32.003808898Z" level=info msg="StartContainer for \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\" returns successfully"
Jul 15 11:26:32.031088 env[1303]: time="2025-07-15T11:26:32.031043226Z" level=info msg="shim disconnected" id=de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268
Jul 15 11:26:32.031088 env[1303]: time="2025-07-15T11:26:32.031082120Z" level=warning msg="cleaning up after shim disconnected" id=de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268 namespace=k8s.io
Jul 15 11:26:32.031088 env[1303]: time="2025-07-15T11:26:32.031090386Z" level=info msg="cleaning up dead shim"
Jul 15 11:26:32.043222 env[1303]: time="2025-07-15T11:26:32.043181610Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:26:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2620 runtime=io.containerd.runc.v2\n"
Jul 15 11:26:32.469389 env[1303]: time="2025-07-15T11:26:32.469317654Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:32.471036 env[1303]: time="2025-07-15T11:26:32.470999577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:32.472464 env[1303]: time="2025-07-15T11:26:32.472405651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:26:32.472866 env[1303]: time="2025-07-15T11:26:32.472832336Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 15 11:26:32.474620 env[1303]: time="2025-07-15T11:26:32.474578070Z" level=info msg="CreateContainer within sandbox \"c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 15 11:26:32.483981 env[1303]: time="2025-07-15T11:26:32.483930978Z" level=info msg="CreateContainer within sandbox \"c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\""
Jul 15 11:26:32.484370 env[1303]: time="2025-07-15T11:26:32.484343236Z" level=info msg="StartContainer for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\""
Jul 15 11:26:32.518245 env[1303]: time="2025-07-15T11:26:32.518175935Z" level=info msg="StartContainer for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" returns successfully"
Jul 15 11:26:32.895401 kubelet[2059]: E0715 11:26:32.895275 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:32.897429 kubelet[2059]: E0715 11:26:32.897402 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:32.898668 env[1303]: time="2025-07-15T11:26:32.898624770Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 11:26:32.916501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59263296.mount: Deactivated successfully.
Jul 15 11:26:32.917133 env[1303]: time="2025-07-15T11:26:32.917094652Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\""
Jul 15 11:26:32.918969 kubelet[2059]: I0715 11:26:32.918596 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7tsm5" podStartSLOduration=0.837122287 podStartE2EDuration="14.918575565s" podCreationTimestamp="2025-07-15 11:26:18 +0000 UTC" firstStartedPulling="2025-07-15 11:26:18.392083637 +0000 UTC m=+7.011372317" lastFinishedPulling="2025-07-15 11:26:32.473536915 +0000 UTC m=+21.092825595" observedRunningTime="2025-07-15 11:26:32.916146963 +0000 UTC m=+21.535435643" watchObservedRunningTime="2025-07-15 11:26:32.918575565 +0000 UTC m=+21.537864245"
Jul 15 11:26:32.920962 env[1303]: time="2025-07-15T11:26:32.920921462Z" level=info msg="StartContainer for \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\""
Jul 15 11:26:32.990595 env[1303]: time="2025-07-15T11:26:32.990558482Z" level=info msg="StartContainer for \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\" returns successfully"
Jul 15 11:26:33.233607 env[1303]: time="2025-07-15T11:26:33.233551515Z" level=info msg="shim disconnected" id=501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d
Jul 15 11:26:33.233607 env[1303]: time="2025-07-15T11:26:33.233602171Z" level=warning msg="cleaning up after shim disconnected" id=501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d namespace=k8s.io
Jul 15 11:26:33.233607 env[1303]: time="2025-07-15T11:26:33.233613032Z" level=info msg="cleaning up dead shim"
Jul 15 11:26:33.241616 env[1303]: time="2025-07-15T11:26:33.241555172Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:26:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2714 runtime=io.containerd.runc.v2\ntime=\"2025-07-15T11:26:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Jul 15 11:26:33.516954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d-rootfs.mount: Deactivated successfully.
Jul 15 11:26:33.901690 kubelet[2059]: E0715 11:26:33.901583 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:33.902025 kubelet[2059]: E0715 11:26:33.901698 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:33.903306 env[1303]: time="2025-07-15T11:26:33.903266322Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 11:26:33.928677 env[1303]: time="2025-07-15T11:26:33.928621457Z" level=info msg="CreateContainer within sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\""
Jul 15 11:26:33.929170 env[1303]: time="2025-07-15T11:26:33.929146366Z" level=info msg="StartContainer for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\""
Jul 15 11:26:33.972864 env[1303]: time="2025-07-15T11:26:33.972321603Z" level=info msg="StartContainer for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" returns successfully"
Jul 15 11:26:34.046028 kubelet[2059]: I0715 11:26:34.045094 2059 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 11:26:34.103529 kubelet[2059]: I0715 11:26:34.103458 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d3cc4d-7126-4e26-83a7-37c43b938b25-config-volume\") pod \"coredns-7c65d6cfc9-ll6rj\" (UID: \"76d3cc4d-7126-4e26-83a7-37c43b938b25\") " pod="kube-system/coredns-7c65d6cfc9-ll6rj"
Jul 15 11:26:34.103741 kubelet[2059]: I0715 11:26:34.103722 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmws\" (UniqueName: \"kubernetes.io/projected/e9db1f1f-622a-4891-b89f-a7f286f52a33-kube-api-access-5nmws\") pod \"coredns-7c65d6cfc9-58km4\" (UID: \"e9db1f1f-622a-4891-b89f-a7f286f52a33\") " pod="kube-system/coredns-7c65d6cfc9-58km4"
Jul 15 11:26:34.103829 kubelet[2059]: I0715 11:26:34.103813 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5bl7\" (UniqueName: \"kubernetes.io/projected/76d3cc4d-7126-4e26-83a7-37c43b938b25-kube-api-access-w5bl7\") pod \"coredns-7c65d6cfc9-ll6rj\" (UID: \"76d3cc4d-7126-4e26-83a7-37c43b938b25\") " pod="kube-system/coredns-7c65d6cfc9-ll6rj"
Jul 15 11:26:34.103919 kubelet[2059]: I0715 11:26:34.103903 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9db1f1f-622a-4891-b89f-a7f286f52a33-config-volume\") pod \"coredns-7c65d6cfc9-58km4\" (UID: \"e9db1f1f-622a-4891-b89f-a7f286f52a33\") " pod="kube-system/coredns-7c65d6cfc9-58km4"
Jul 15 11:26:34.370085 kubelet[2059]: E0715 11:26:34.370038 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:34.370702 env[1303]: time="2025-07-15T11:26:34.370668375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ll6rj,Uid:76d3cc4d-7126-4e26-83a7-37c43b938b25,Namespace:kube-system,Attempt:0,}"
Jul 15 11:26:34.372935 kubelet[2059]: E0715 11:26:34.372915 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:34.373192 env[1303]: time="2025-07-15T11:26:34.373148751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58km4,Uid:e9db1f1f-622a-4891-b89f-a7f286f52a33,Namespace:kube-system,Attempt:0,}"
Jul 15 11:26:34.905995 kubelet[2059]: E0715 11:26:34.905963 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:34.920212 kubelet[2059]: I0715 11:26:34.920131 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdvzn" podStartSLOduration=8.887250056 podStartE2EDuration="17.920110683s" podCreationTimestamp="2025-07-15 11:26:17 +0000 UTC" firstStartedPulling="2025-07-15 11:26:18.209592097 +0000 UTC m=+6.828880777" lastFinishedPulling="2025-07-15 11:26:27.242452724 +0000 UTC m=+15.861741404" observedRunningTime="2025-07-15 11:26:34.920111134 +0000 UTC m=+23.539399814" watchObservedRunningTime="2025-07-15 11:26:34.920110683 +0000 UTC m=+23.539399363"
Jul 15 11:26:35.907265 kubelet[2059]: E0715 11:26:35.907225 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:36.044684 systemd-networkd[1080]: cilium_host: Link UP
Jul 15 11:26:36.044840 systemd-networkd[1080]: cilium_net: Link UP
Jul 15 11:26:36.044844 systemd-networkd[1080]: cilium_net: Gained carrier
Jul 15 11:26:36.045006 systemd-networkd[1080]: cilium_host: Gained carrier
Jul 15 11:26:36.051435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 15 11:26:36.051604 systemd-networkd[1080]: cilium_host: Gained IPv6LL
Jul 15 11:26:36.119911 systemd-networkd[1080]: cilium_vxlan: Link UP
Jul 15 11:26:36.119920 systemd-networkd[1080]: cilium_vxlan: Gained carrier
Jul 15 11:26:36.175520 systemd-networkd[1080]: cilium_net: Gained IPv6LL
Jul 15 11:26:36.305423 kernel: NET: Registered PF_ALG protocol family
Jul 15 11:26:36.851502 systemd-networkd[1080]: lxc_health: Link UP
Jul 15 11:26:36.864215 systemd-networkd[1080]: lxc_health: Gained carrier
Jul 15 11:26:36.864418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 15 11:26:36.909294 kubelet[2059]: E0715 11:26:36.909112 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:36.966966 systemd-networkd[1080]: lxcc77711c83782: Link UP
Jul 15 11:26:36.972404 kernel: eth0: renamed from tmp4b7a1
Jul 15 11:26:36.983778 systemd-networkd[1080]: lxc0b69377ca76f: Link UP
Jul 15 11:26:36.984399 kernel: eth0: renamed from tmp6d0b2
Jul 15 11:26:36.995456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc77711c83782: link becomes ready
Jul 15 11:26:36.995545 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0b69377ca76f: link becomes ready
Jul 15 11:26:36.993890 systemd-networkd[1080]: lxcc77711c83782: Gained carrier
Jul 15 11:26:36.994016 systemd-networkd[1080]: lxc0b69377ca76f: Gained carrier
Jul 15 11:26:37.291814 systemd-networkd[1080]: cilium_vxlan: Gained IPv6LL
Jul 15 11:26:38.092064 kubelet[2059]: E0715 11:26:38.092027 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:26:38.111542 systemd-networkd[1080]: lxc_health: Gained IPv6LL
Jul 15 11:26:38.687550 systemd-networkd[1080]: lxc0b69377ca76f: Gained IPv6LL
Jul 15 11:26:39.007609
systemd-networkd[1080]: lxcc77711c83782: Gained IPv6LL Jul 15 11:26:40.366029 env[1303]: time="2025-07-15T11:26:40.365966868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:26:40.366029 env[1303]: time="2025-07-15T11:26:40.366005701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:26:40.366029 env[1303]: time="2025-07-15T11:26:40.366014648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:26:40.366415 env[1303]: time="2025-07-15T11:26:40.366252596Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d0b2c5fcba977821e0fb3b839f0c7378c8d32e098df6197bc12414046265de3 pid=3280 runtime=io.containerd.runc.v2 Jul 15 11:26:40.378829 systemd[1]: run-containerd-runc-k8s.io-6d0b2c5fcba977821e0fb3b839f0c7378c8d32e098df6197bc12414046265de3-runc.XrXyXT.mount: Deactivated successfully. 
Jul 15 11:26:40.389196 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:26:40.408856 env[1303]: time="2025-07-15T11:26:40.408826214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ll6rj,Uid:76d3cc4d-7126-4e26-83a7-37c43b938b25,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d0b2c5fcba977821e0fb3b839f0c7378c8d32e098df6197bc12414046265de3\"" Jul 15 11:26:40.409525 kubelet[2059]: E0715 11:26:40.409493 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:40.411765 env[1303]: time="2025-07-15T11:26:40.411741962Z" level=info msg="CreateContainer within sandbox \"6d0b2c5fcba977821e0fb3b839f0c7378c8d32e098df6197bc12414046265de3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:26:40.454398 env[1303]: time="2025-07-15T11:26:40.454318014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:26:40.454398 env[1303]: time="2025-07-15T11:26:40.454357760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:26:40.454398 env[1303]: time="2025-07-15T11:26:40.454366797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:26:40.454703 env[1303]: time="2025-07-15T11:26:40.454641524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b7a1f2dc10cf101956c1186f92f205c55b056876aaf41f65bf1436e81472055 pid=3321 runtime=io.containerd.runc.v2 Jul 15 11:26:40.472473 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:26:40.492914 env[1303]: time="2025-07-15T11:26:40.492865475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58km4,Uid:e9db1f1f-622a-4891-b89f-a7f286f52a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b7a1f2dc10cf101956c1186f92f205c55b056876aaf41f65bf1436e81472055\"" Jul 15 11:26:40.493492 kubelet[2059]: E0715 11:26:40.493465 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:40.494888 env[1303]: time="2025-07-15T11:26:40.494863546Z" level=info msg="CreateContainer within sandbox \"4b7a1f2dc10cf101956c1186f92f205c55b056876aaf41f65bf1436e81472055\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:26:40.886316 env[1303]: time="2025-07-15T11:26:40.886247814Z" level=info msg="CreateContainer within sandbox \"6d0b2c5fcba977821e0fb3b839f0c7378c8d32e098df6197bc12414046265de3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d8b44e6ad084cba0d507725a2ccd537ff8630ad54086821a8edd0b3e882a87e\"" Jul 15 11:26:40.887044 env[1303]: time="2025-07-15T11:26:40.886981385Z" level=info msg="StartContainer for \"2d8b44e6ad084cba0d507725a2ccd537ff8630ad54086821a8edd0b3e882a87e\"" Jul 15 11:26:40.888917 env[1303]: time="2025-07-15T11:26:40.888881721Z" level=info msg="CreateContainer within sandbox \"4b7a1f2dc10cf101956c1186f92f205c55b056876aaf41f65bf1436e81472055\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af0a70b655c081471ad808ca30e2811a7e99828dacac01486ba64ba755b5a655\"" Jul 15 11:26:40.889275 env[1303]: time="2025-07-15T11:26:40.889251979Z" level=info msg="StartContainer for \"af0a70b655c081471ad808ca30e2811a7e99828dacac01486ba64ba755b5a655\"" Jul 15 11:26:41.059862 env[1303]: time="2025-07-15T11:26:41.059814330Z" level=info msg="StartContainer for \"2d8b44e6ad084cba0d507725a2ccd537ff8630ad54086821a8edd0b3e882a87e\" returns successfully" Jul 15 11:26:41.062450 env[1303]: time="2025-07-15T11:26:41.062413891Z" level=info msg="StartContainer for \"af0a70b655c081471ad808ca30e2811a7e99828dacac01486ba64ba755b5a655\" returns successfully" Jul 15 11:26:41.921552 kubelet[2059]: E0715 11:26:41.921256 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:41.924265 kubelet[2059]: E0715 11:26:41.924243 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:42.057336 kubelet[2059]: I0715 11:26:42.057244 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-58km4" podStartSLOduration=24.057221934 podStartE2EDuration="24.057221934s" podCreationTimestamp="2025-07-15 11:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:26:42.056235859 +0000 UTC m=+30.675524550" watchObservedRunningTime="2025-07-15 11:26:42.057221934 +0000 UTC m=+30.676510644" Jul 15 11:26:42.077231 kubelet[2059]: I0715 11:26:42.076983 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ll6rj" podStartSLOduration=24.076964005 podStartE2EDuration="24.076964005s" 
podCreationTimestamp="2025-07-15 11:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:26:42.076742237 +0000 UTC m=+30.696030917" watchObservedRunningTime="2025-07-15 11:26:42.076964005 +0000 UTC m=+30.696252685" Jul 15 11:26:42.926185 kubelet[2059]: E0715 11:26:42.926154 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:42.926691 kubelet[2059]: E0715 11:26:42.926166 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:43.927541 kubelet[2059]: E0715 11:26:43.927513 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:43.927541 kubelet[2059]: E0715 11:26:43.927544 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:44.965611 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:54702.service. Jul 15 11:26:45.007916 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 54702 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:26:45.009105 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:26:45.012550 systemd-logind[1290]: New session 6 of user core. Jul 15 11:26:45.013296 systemd[1]: Started session-6.scope. Jul 15 11:26:45.151729 sshd[3434]: pam_unix(sshd:session): session closed for user core Jul 15 11:26:45.155163 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:54702.service: Deactivated successfully. 
Jul 15 11:26:45.156362 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:26:45.156928 systemd-logind[1290]: Session 6 logged out. Waiting for processes to exit. Jul 15 11:26:45.157813 systemd-logind[1290]: Removed session 6. Jul 15 11:26:45.806664 kubelet[2059]: I0715 11:26:45.806624 2059 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:26:45.807497 kubelet[2059]: E0715 11:26:45.807473 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:45.931448 kubelet[2059]: E0715 11:26:45.931418 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:26:50.154454 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:55694.service. Jul 15 11:26:50.193139 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 55694 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:26:50.194046 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:26:50.197188 systemd-logind[1290]: New session 7 of user core. Jul 15 11:26:50.198348 systemd[1]: Started session-7.scope. Jul 15 11:26:50.325179 sshd[3451]: pam_unix(sshd:session): session closed for user core Jul 15 11:26:50.326922 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:55694.service: Deactivated successfully. Jul 15 11:26:50.327600 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 11:26:50.328298 systemd-logind[1290]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:26:50.329048 systemd-logind[1290]: Removed session 7. Jul 15 11:26:55.327859 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:55696.service. 
Jul 15 11:26:55.368896 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 55696 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:26:55.370117 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:26:55.373851 systemd-logind[1290]: New session 8 of user core. Jul 15 11:26:55.374587 systemd[1]: Started session-8.scope. Jul 15 11:26:55.486274 sshd[3466]: pam_unix(sshd:session): session closed for user core Jul 15 11:26:55.488469 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:55696.service: Deactivated successfully. Jul 15 11:26:55.489467 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 11:26:55.490483 systemd-logind[1290]: Session 8 logged out. Waiting for processes to exit. Jul 15 11:26:55.491263 systemd-logind[1290]: Removed session 8. Jul 15 11:27:00.489687 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:54248.service. Jul 15 11:27:00.528770 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 54248 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:00.529852 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:00.533634 systemd-logind[1290]: New session 9 of user core. Jul 15 11:27:00.534470 systemd[1]: Started session-9.scope. Jul 15 11:27:00.643871 sshd[3482]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:00.645726 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:54248.service: Deactivated successfully. Jul 15 11:27:00.646623 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 11:27:00.647272 systemd-logind[1290]: Session 9 logged out. Waiting for processes to exit. Jul 15 11:27:00.648089 systemd-logind[1290]: Removed session 9. Jul 15 11:27:05.647131 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:54260.service. 
Jul 15 11:27:05.686364 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 54260 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:05.687435 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:05.690832 systemd-logind[1290]: New session 10 of user core. Jul 15 11:27:05.691635 systemd[1]: Started session-10.scope. Jul 15 11:27:05.792527 sshd[3498]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:05.795245 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:54266.service. Jul 15 11:27:05.795744 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:54260.service: Deactivated successfully. Jul 15 11:27:05.796717 systemd-logind[1290]: Session 10 logged out. Waiting for processes to exit. Jul 15 11:27:05.796797 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 11:27:05.797729 systemd-logind[1290]: Removed session 10. Jul 15 11:27:05.837738 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 54266 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:05.839655 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:05.843979 systemd-logind[1290]: New session 11 of user core. Jul 15 11:27:05.844675 systemd[1]: Started session-11.scope. Jul 15 11:27:06.127333 sshd[3512]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:06.129756 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:54268.service. Jul 15 11:27:06.133300 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:54266.service: Deactivated successfully. Jul 15 11:27:06.136226 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 11:27:06.136841 systemd-logind[1290]: Session 11 logged out. Waiting for processes to exit. Jul 15 11:27:06.140831 systemd-logind[1290]: Removed session 11. 
Jul 15 11:27:06.173299 sshd[3523]: Accepted publickey for core from 10.0.0.1 port 54268 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:06.174770 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:06.178959 systemd-logind[1290]: New session 12 of user core. Jul 15 11:27:06.180060 systemd[1]: Started session-12.scope. Jul 15 11:27:06.316297 sshd[3523]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:06.319175 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:54268.service: Deactivated successfully. Jul 15 11:27:06.320537 systemd-logind[1290]: Session 12 logged out. Waiting for processes to exit. Jul 15 11:27:06.320595 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 11:27:06.321593 systemd-logind[1290]: Removed session 12. Jul 15 11:27:11.319782 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:48088.service. Jul 15 11:27:11.361630 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 48088 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:11.363137 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:11.367032 systemd-logind[1290]: New session 13 of user core. Jul 15 11:27:11.367754 systemd[1]: Started session-13.scope. Jul 15 11:27:11.517009 sshd[3539]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:11.519160 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:48088.service: Deactivated successfully. Jul 15 11:27:11.520104 systemd-logind[1290]: Session 13 logged out. Waiting for processes to exit. Jul 15 11:27:11.520136 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 11:27:11.521017 systemd-logind[1290]: Removed session 13. Jul 15 11:27:16.521438 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:48096.service. 
Jul 15 11:27:16.562016 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 48096 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:16.563400 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:16.566924 systemd-logind[1290]: New session 14 of user core. Jul 15 11:27:16.567765 systemd[1]: Started session-14.scope. Jul 15 11:27:16.672552 sshd[3557]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:16.674917 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:48096.service: Deactivated successfully. Jul 15 11:27:16.676075 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 11:27:16.676076 systemd-logind[1290]: Session 14 logged out. Waiting for processes to exit. Jul 15 11:27:16.677015 systemd-logind[1290]: Removed session 14. Jul 15 11:27:21.675761 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:48798.service. Jul 15 11:27:21.713886 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 48798 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:21.714938 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:21.718300 systemd-logind[1290]: New session 15 of user core. Jul 15 11:27:21.719257 systemd[1]: Started session-15.scope. Jul 15 11:27:21.820906 sshd[3573]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:21.824073 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:48810.service. Jul 15 11:27:21.824600 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:48798.service: Deactivated successfully. Jul 15 11:27:21.826297 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 11:27:21.826760 systemd-logind[1290]: Session 15 logged out. Waiting for processes to exit. Jul 15 11:27:21.827644 systemd-logind[1290]: Removed session 15. 
Jul 15 11:27:21.862028 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 48810 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:21.863143 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:21.866347 systemd-logind[1290]: New session 16 of user core. Jul 15 11:27:21.867213 systemd[1]: Started session-16.scope. Jul 15 11:27:22.215813 sshd[3586]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:22.218332 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:48812.service. Jul 15 11:27:22.218761 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:48810.service: Deactivated successfully. Jul 15 11:27:22.219891 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 11:27:22.220476 systemd-logind[1290]: Session 16 logged out. Waiting for processes to exit. Jul 15 11:27:22.221488 systemd-logind[1290]: Removed session 16. Jul 15 11:27:22.260508 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 48812 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:22.261486 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:22.264815 systemd-logind[1290]: New session 17 of user core. Jul 15 11:27:22.265583 systemd[1]: Started session-17.scope. Jul 15 11:27:24.135474 sshd[3599]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:24.138687 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:48828.service. Jul 15 11:27:24.139262 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:48812.service: Deactivated successfully. Jul 15 11:27:24.140344 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 11:27:24.141066 systemd-logind[1290]: Session 17 logged out. Waiting for processes to exit. Jul 15 11:27:24.142080 systemd-logind[1290]: Removed session 17. 
Jul 15 11:27:24.181807 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 48828 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:24.182865 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:24.185768 systemd-logind[1290]: New session 18 of user core. Jul 15 11:27:24.186484 systemd[1]: Started session-18.scope. Jul 15 11:27:24.864960 sshd[3617]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:24.867713 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:48836.service. Jul 15 11:27:24.868255 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:48828.service: Deactivated successfully. Jul 15 11:27:24.869777 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 11:27:24.870685 systemd-logind[1290]: Session 18 logged out. Waiting for processes to exit. Jul 15 11:27:24.871852 systemd-logind[1290]: Removed session 18. Jul 15 11:27:24.909517 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 48836 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:24.910852 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:24.914467 systemd-logind[1290]: New session 19 of user core. Jul 15 11:27:24.915243 systemd[1]: Started session-19.scope. Jul 15 11:27:25.034011 sshd[3631]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:25.036432 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:48836.service: Deactivated successfully. Jul 15 11:27:25.037332 systemd-logind[1290]: Session 19 logged out. Waiting for processes to exit. Jul 15 11:27:25.037343 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 11:27:25.038060 systemd-logind[1290]: Removed session 19. 
Jul 15 11:27:28.458043 kubelet[2059]: E0715 11:27:28.458007 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:30.037999 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:59156.service. Jul 15 11:27:30.079502 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 59156 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:30.080571 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:30.084173 systemd-logind[1290]: New session 20 of user core. Jul 15 11:27:30.085109 systemd[1]: Started session-20.scope. Jul 15 11:27:30.331145 sshd[3646]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:30.333744 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:59156.service: Deactivated successfully. Jul 15 11:27:30.334997 systemd-logind[1290]: Session 20 logged out. Waiting for processes to exit. Jul 15 11:27:30.335110 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 11:27:30.335933 systemd-logind[1290]: Removed session 20. Jul 15 11:27:33.458889 kubelet[2059]: E0715 11:27:33.458844 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:35.334772 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:59166.service. Jul 15 11:27:35.375276 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:35.376350 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:35.379676 systemd-logind[1290]: New session 21 of user core. Jul 15 11:27:35.380622 systemd[1]: Started session-21.scope. 
Jul 15 11:27:35.480907 sshd[3663]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:35.482870 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:59166.service: Deactivated successfully. Jul 15 11:27:35.483872 systemd-logind[1290]: Session 21 logged out. Waiting for processes to exit. Jul 15 11:27:35.483928 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 11:27:35.484847 systemd-logind[1290]: Removed session 21. Jul 15 11:27:40.483838 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:53516.service. Jul 15 11:27:40.525313 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 53516 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:40.526570 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:40.530599 systemd-logind[1290]: New session 22 of user core. Jul 15 11:27:40.531282 systemd[1]: Started session-22.scope. Jul 15 11:27:40.630480 sshd[3677]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:40.632782 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:53516.service: Deactivated successfully. Jul 15 11:27:40.633513 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:27:40.634255 systemd-logind[1290]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:27:40.634927 systemd-logind[1290]: Removed session 22. Jul 15 11:27:41.458183 kubelet[2059]: E0715 11:27:41.458153 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:44.458032 kubelet[2059]: E0715 11:27:44.458001 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:45.634287 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:53524.service. 
Jul 15 11:27:45.675603 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 53524 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:27:45.676711 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:27:45.680021 systemd-logind[1290]: New session 23 of user core.
Jul 15 11:27:45.680972 systemd[1]: Started session-23.scope.
Jul 15 11:27:45.778278 sshd[3691]: pam_unix(sshd:session): session closed for user core
Jul 15 11:27:45.781259 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:53530.service.
Jul 15 11:27:45.781937 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:53524.service: Deactivated successfully.
Jul 15 11:27:45.782881 systemd-logind[1290]: Session 23 logged out. Waiting for processes to exit.
Jul 15 11:27:45.782922 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 11:27:45.783922 systemd-logind[1290]: Removed session 23.
Jul 15 11:27:45.819576 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:27:45.820807 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:27:45.824096 systemd-logind[1290]: New session 24 of user core.
Jul 15 11:27:45.825025 systemd[1]: Started session-24.scope.
Jul 15 11:27:48.125518 env[1303]: time="2025-07-15T11:27:48.125472726Z" level=info msg="StopContainer for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" with timeout 30 (s)"
Jul 15 11:27:48.125967 env[1303]: time="2025-07-15T11:27:48.125792705Z" level=info msg="Stop container \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" with signal terminated"
Jul 15 11:27:48.132847 systemd[1]: run-containerd-runc-k8s.io-f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d-runc.iPVMBc.mount: Deactivated successfully.
Jul 15 11:27:48.149251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c-rootfs.mount: Deactivated successfully.
Jul 15 11:27:48.151313 env[1303]: time="2025-07-15T11:27:48.151119696Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:27:48.156259 env[1303]: time="2025-07-15T11:27:48.156211837Z" level=info msg="shim disconnected" id=75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c
Jul 15 11:27:48.156468 env[1303]: time="2025-07-15T11:27:48.156428769Z" level=warning msg="cleaning up after shim disconnected" id=75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c namespace=k8s.io
Jul 15 11:27:48.156468 env[1303]: time="2025-07-15T11:27:48.156449118Z" level=info msg="cleaning up dead shim"
Jul 15 11:27:48.159173 env[1303]: time="2025-07-15T11:27:48.159136227Z" level=info msg="StopContainer for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" with timeout 2 (s)"
Jul 15 11:27:48.159413 env[1303]: time="2025-07-15T11:27:48.159340996Z" level=info msg="Stop container \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" with signal terminated"
Jul 15 11:27:48.163899 env[1303]: time="2025-07-15T11:27:48.163853896Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3753 runtime=io.containerd.runc.v2\n"
Jul 15 11:27:48.166529 systemd-networkd[1080]: lxc_health: Link DOWN
Jul 15 11:27:48.166536 systemd-networkd[1080]: lxc_health: Lost carrier
Jul 15 11:27:48.167173 env[1303]: time="2025-07-15T11:27:48.167052477Z" level=info msg="StopContainer for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" returns successfully"
Jul 15 11:27:48.167750 env[1303]: time="2025-07-15T11:27:48.167720036Z" level=info msg="StopPodSandbox for \"c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6\""
Jul 15 11:27:48.167833 env[1303]: time="2025-07-15T11:27:48.167778498Z" level=info msg="Container to stop \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:27:48.170652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6-shm.mount: Deactivated successfully.
Jul 15 11:27:48.197300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6-rootfs.mount: Deactivated successfully.
Jul 15 11:27:48.203680 env[1303]: time="2025-07-15T11:27:48.203620600Z" level=info msg="shim disconnected" id=c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6
Jul 15 11:27:48.203680 env[1303]: time="2025-07-15T11:27:48.203674822Z" level=warning msg="cleaning up after shim disconnected" id=c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6 namespace=k8s.io
Jul 15 11:27:48.203680 env[1303]: time="2025-07-15T11:27:48.203686624Z" level=info msg="cleaning up dead shim"
Jul 15 11:27:48.212719 env[1303]: time="2025-07-15T11:27:48.212678390Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n"
Jul 15 11:27:48.212983 env[1303]: time="2025-07-15T11:27:48.212955818Z" level=info msg="TearDown network for sandbox \"c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6\" successfully"
Jul 15 11:27:48.212983 env[1303]: time="2025-07-15T11:27:48.212976036Z" level=info msg="StopPodSandbox for \"c15d53e49ccb54196fd2c14350e1e75dc15f2d73fbf1f02065fb6e18829055e6\" returns successfully"
Jul 15 11:27:48.222914 env[1303]: time="2025-07-15T11:27:48.222870517Z" level=info msg="shim disconnected" id=f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d
Jul 15 11:27:48.223156 env[1303]: time="2025-07-15T11:27:48.223114060Z" level=warning msg="cleaning up after shim disconnected" id=f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d namespace=k8s.io
Jul 15 11:27:48.223156 env[1303]: time="2025-07-15T11:27:48.223146933Z" level=info msg="cleaning up dead shim"
Jul 15 11:27:48.230566 env[1303]: time="2025-07-15T11:27:48.230511404Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3821 runtime=io.containerd.runc.v2\n"
Jul 15 11:27:48.233135 env[1303]: time="2025-07-15T11:27:48.233101788Z" level=info msg="StopContainer for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" returns successfully"
Jul 15 11:27:48.233693 env[1303]: time="2025-07-15T11:27:48.233652495Z" level=info msg="StopPodSandbox for \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\""
Jul 15 11:27:48.233762 env[1303]: time="2025-07-15T11:27:48.233735173Z" level=info msg="Container to stop \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:27:48.233762 env[1303]: time="2025-07-15T11:27:48.233755701Z" level=info msg="Container to stop \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:27:48.233817 env[1303]: time="2025-07-15T11:27:48.233770159Z" level=info msg="Container to stop \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:27:48.233817 env[1303]: time="2025-07-15T11:27:48.233784246Z" level=info msg="Container to stop \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:27:48.233817 env[1303]: time="2025-07-15T11:27:48.233796479Z" level=info msg="Container to stop \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:27:48.257638 env[1303]: time="2025-07-15T11:27:48.257580577Z" level=info msg="shim disconnected" id=5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2
Jul 15 11:27:48.257638 env[1303]: time="2025-07-15T11:27:48.257628698Z" level=warning msg="cleaning up after shim disconnected" id=5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2 namespace=k8s.io
Jul 15 11:27:48.257638 env[1303]: time="2025-07-15T11:27:48.257637034Z" level=info msg="cleaning up dead shim"
Jul 15 11:27:48.263989 env[1303]: time="2025-07-15T11:27:48.263966638Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3852 runtime=io.containerd.runc.v2\n"
Jul 15 11:27:48.264306 env[1303]: time="2025-07-15T11:27:48.264285314Z" level=info msg="TearDown network for sandbox \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" successfully"
Jul 15 11:27:48.264422 env[1303]: time="2025-07-15T11:27:48.264364865Z" level=info msg="StopPodSandbox for \"5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2\" returns successfully"
Jul 15 11:27:48.306711 kubelet[2059]: I0715 11:27:48.306655 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-net\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.306711 kubelet[2059]: I0715 11:27:48.306718 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-hubble-tls\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307191 kubelet[2059]: I0715 11:27:48.306743 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-cgroup\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307191 kubelet[2059]: I0715 11:27:48.306753 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307191 kubelet[2059]: I0715 11:27:48.306763 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-bpf-maps\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307191 kubelet[2059]: I0715 11:27:48.306792 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307191 kubelet[2059]: I0715 11:27:48.306813 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307317 kubelet[2059]: I0715 11:27:48.306833 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtr49\" (UniqueName: \"kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-kube-api-access-gtr49\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307317 kubelet[2059]: I0715 11:27:48.306860 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-hostproc\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307317 kubelet[2059]: I0715 11:27:48.306874 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-run\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307317 kubelet[2059]: I0715 11:27:48.306890 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrtf\" (UniqueName: \"kubernetes.io/projected/8a07036d-f6a5-42d4-b607-0aa4500de691-kube-api-access-dfrtf\") pod \"8a07036d-f6a5-42d4-b607-0aa4500de691\" (UID: \"8a07036d-f6a5-42d4-b607-0aa4500de691\") "
Jul 15 11:27:48.307317 kubelet[2059]: I0715 11:27:48.306906 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-etc-cni-netd\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307317 kubelet[2059]: I0715 11:27:48.306924 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cni-path\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307503 kubelet[2059]: I0715 11:27:48.306928 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-hostproc" (OuterVolumeSpecName: "hostproc") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307503 kubelet[2059]: I0715 11:27:48.306940 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-kernel\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307503 kubelet[2059]: I0715 11:27:48.306959 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a07036d-f6a5-42d4-b607-0aa4500de691-cilium-config-path\") pod \"8a07036d-f6a5-42d4-b607-0aa4500de691\" (UID: \"8a07036d-f6a5-42d4-b607-0aa4500de691\") "
Jul 15 11:27:48.307503 kubelet[2059]: I0715 11:27:48.306979 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0af08191-10ec-44c4-a087-f6925b4b6bf9-clustermesh-secrets\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307503 kubelet[2059]: I0715 11:27:48.306998 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-config-path\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307503 kubelet[2059]: I0715 11:27:48.307014 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-xtables-lock\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307033 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-lib-modules\") pod \"0af08191-10ec-44c4-a087-f6925b4b6bf9\" (UID: \"0af08191-10ec-44c4-a087-f6925b4b6bf9\") "
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307065 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307088 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307099 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307107 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307126 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307657 kubelet[2059]: I0715 11:27:48.307146 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307820 kubelet[2059]: I0715 11:27:48.307452 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cni-path" (OuterVolumeSpecName: "cni-path") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307820 kubelet[2059]: I0715 11:27:48.307481 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307820 kubelet[2059]: I0715 11:27:48.307497 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.307820 kubelet[2059]: I0715 11:27:48.307553 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 11:27:48.310293 kubelet[2059]: I0715 11:27:48.310258 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 11:27:48.310628 kubelet[2059]: I0715 11:27:48.310596 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 11:27:48.310760 kubelet[2059]: I0715 11:27:48.310742 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a07036d-f6a5-42d4-b607-0aa4500de691-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a07036d-f6a5-42d4-b607-0aa4500de691" (UID: "8a07036d-f6a5-42d4-b607-0aa4500de691"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 11:27:48.310951 kubelet[2059]: I0715 11:27:48.310908 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-kube-api-access-gtr49" (OuterVolumeSpecName: "kube-api-access-gtr49") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "kube-api-access-gtr49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 11:27:48.312092 kubelet[2059]: I0715 11:27:48.312052 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af08191-10ec-44c4-a087-f6925b4b6bf9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0af08191-10ec-44c4-a087-f6925b4b6bf9" (UID: "0af08191-10ec-44c4-a087-f6925b4b6bf9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 15 11:27:48.312736 kubelet[2059]: I0715 11:27:48.312696 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a07036d-f6a5-42d4-b607-0aa4500de691-kube-api-access-dfrtf" (OuterVolumeSpecName: "kube-api-access-dfrtf") pod "8a07036d-f6a5-42d4-b607-0aa4500de691" (UID: "8a07036d-f6a5-42d4-b607-0aa4500de691"). InnerVolumeSpecName "kube-api-access-dfrtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 11:27:48.408188 kubelet[2059]: I0715 11:27:48.408143 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtr49\" (UniqueName: \"kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-kube-api-access-gtr49\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408188 kubelet[2059]: I0715 11:27:48.408186 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408200 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfrtf\" (UniqueName: \"kubernetes.io/projected/8a07036d-f6a5-42d4-b607-0aa4500de691-kube-api-access-dfrtf\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408212 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408222 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408233 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408245 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a07036d-f6a5-42d4-b607-0aa4500de691-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408257 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0af08191-10ec-44c4-a087-f6925b4b6bf9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408270 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0af08191-10ec-44c4-a087-f6925b4b6bf9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408328 kubelet[2059]: I0715 11:27:48.408286 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408538 kubelet[2059]: I0715 11:27:48.408296 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0af08191-10ec-44c4-a087-f6925b4b6bf9-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.408538 kubelet[2059]: I0715 11:27:48.408307 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0af08191-10ec-44c4-a087-f6925b4b6bf9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 15 11:27:48.458185 kubelet[2059]: E0715 11:27:48.458132 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:27:49.032297 kubelet[2059]: I0715 11:27:49.032268 2059 scope.go:117] "RemoveContainer" containerID="f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d"
Jul 15 11:27:49.033875 env[1303]: time="2025-07-15T11:27:49.033809255Z" level=info msg="RemoveContainer for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\""
Jul 15 11:27:49.037438 env[1303]: time="2025-07-15T11:27:49.037403336Z" level=info msg="RemoveContainer for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" returns successfully"
Jul 15 11:27:49.037661 kubelet[2059]: I0715 11:27:49.037638 2059 scope.go:117] "RemoveContainer" containerID="501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d"
Jul 15 11:27:49.038689 env[1303]: time="2025-07-15T11:27:49.038660827Z" level=info msg="RemoveContainer for \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\""
Jul 15 11:27:49.041871 env[1303]: time="2025-07-15T11:27:49.041833396Z" level=info msg="RemoveContainer for \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\" returns successfully"
Jul 15 11:27:49.041984 kubelet[2059]: I0715 11:27:49.041965 2059 scope.go:117] "RemoveContainer" containerID="de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268"
Jul 15 11:27:49.042840 env[1303]: time="2025-07-15T11:27:49.042814330Z" level=info msg="RemoveContainer for \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\""
Jul 15 11:27:49.046515 env[1303]: time="2025-07-15T11:27:49.046456563Z" level=info msg="RemoveContainer for \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\" returns successfully"
Jul 15 11:27:49.046655 kubelet[2059]: I0715 11:27:49.046618 2059 scope.go:117] "RemoveContainer" containerID="0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015"
Jul 15 11:27:49.048554 env[1303]: time="2025-07-15T11:27:49.047758488Z" level=info msg="RemoveContainer for \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\""
Jul 15 11:27:49.054958 env[1303]: time="2025-07-15T11:27:49.054915279Z" level=info msg="RemoveContainer for \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\" returns successfully"
Jul 15 11:27:49.055169 kubelet[2059]: I0715 11:27:49.055146 2059 scope.go:117] "RemoveContainer" containerID="198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa"
Jul 15 11:27:49.056170 env[1303]: time="2025-07-15T11:27:49.056148463Z" level=info msg="RemoveContainer for \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\""
Jul 15 11:27:49.059543 env[1303]: time="2025-07-15T11:27:49.059511415Z" level=info msg="RemoveContainer for \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\" returns successfully"
Jul 15 11:27:49.059671 kubelet[2059]: I0715 11:27:49.059652 2059 scope.go:117] "RemoveContainer" containerID="f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d"
Jul 15 11:27:49.059962 env[1303]: time="2025-07-15T11:27:49.059894182Z" level=error msg="ContainerStatus for \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\": not found"
Jul 15 11:27:49.060099 kubelet[2059]: E0715 11:27:49.060079 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\": not found" containerID="f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d"
Jul 15 11:27:49.060168 kubelet[2059]: I0715 11:27:49.060103 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d"} err="failed to get container status \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d\": not found"
Jul 15 11:27:49.060168 kubelet[2059]: I0715 11:27:49.060166 2059 scope.go:117] "RemoveContainer" containerID="501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d"
Jul 15 11:27:49.060321 env[1303]: time="2025-07-15T11:27:49.060279475Z" level=error msg="ContainerStatus for \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\": not found"
Jul 15 11:27:49.060402 kubelet[2059]: E0715 11:27:49.060386 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\": not found" containerID="501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d"
Jul 15 11:27:49.060496 kubelet[2059]: I0715 11:27:49.060472 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d"} err="failed to get container status \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\": rpc error: code = NotFound desc = an error occurred when try to find container \"501c5b5a78ece7801a403e9ef6be6c5cff1aeda7398e228edfee791cd8c1321d\": not found"
Jul 15 11:27:49.060496 kubelet[2059]: I0715 11:27:49.060496 2059 scope.go:117] "RemoveContainer" containerID="de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268"
Jul 15 11:27:49.060662 env[1303]: time="2025-07-15T11:27:49.060624601Z" level=error msg="ContainerStatus for \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\": not found"
Jul 15 11:27:49.060754 kubelet[2059]: E0715 11:27:49.060739 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\": not found" containerID="de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268"
Jul 15 11:27:49.060800 kubelet[2059]: I0715 11:27:49.060754 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268"} err="failed to get container status \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\": rpc error: code = NotFound desc = an error occurred when try to find container \"de4d07254faa5b1a5a13b6c6b74de89120a46e3051c03aca8f5a91f581ff9268\": not found"
Jul 15 11:27:49.060800 kubelet[2059]: I0715 11:27:49.060774 2059 scope.go:117] "RemoveContainer" containerID="0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015"
Jul 15 11:27:49.060919 env[1303]: time="2025-07-15T11:27:49.060874525Z" level=error msg="ContainerStatus for \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\": not found"
Jul 15 11:27:49.061050 kubelet[2059]: E0715 11:27:49.061034 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\": not found" containerID="0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015"
Jul 15 11:27:49.061050 kubelet[2059]: I0715 11:27:49.061047 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015"} err="failed to get container status \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\": rpc error: code = NotFound desc = an error occurred when try to find container \"0999ab611d687221738ec90d58d70e830f830984fa913b5b3ebf4df02b42e015\": not found"
Jul 15 11:27:49.061158 kubelet[2059]: I0715 11:27:49.061057 2059 scope.go:117] "RemoveContainer" containerID="198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa"
Jul 15 11:27:49.061271 env[1303]: time="2025-07-15T11:27:49.061219752Z" level=error msg="ContainerStatus for \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\": not found"
Jul 15 11:27:49.061352 kubelet[2059]: E0715 11:27:49.061337 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\": not found" containerID="198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa"
Jul 15 11:27:49.061409 kubelet[2059]: I0715 11:27:49.061363 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa"} err="failed to get container status \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"198101f2b4b9311f0c416f1bccfc9132ef68b2041cc5665e091b15baafd907aa\": not found"
Jul 15 11:27:49.061409 kubelet[2059]: I0715 11:27:49.061387 2059 scope.go:117] "RemoveContainer" containerID="75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c"
Jul 15 11:27:49.062113 env[1303]: time="2025-07-15T11:27:49.062095035Z" level=info msg="RemoveContainer for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\""
Jul 15 11:27:49.065025 env[1303]: time="2025-07-15T11:27:49.064996710Z" level=info msg="RemoveContainer for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" returns successfully"
Jul 15 11:27:49.065125 kubelet[2059]: I0715 11:27:49.065108 2059 scope.go:117] "RemoveContainer" containerID="75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c"
Jul 15 11:27:49.065305 env[1303]: time="2025-07-15T11:27:49.065260220Z" level=error msg="ContainerStatus for \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\": not found"
Jul 15 11:27:49.065436 kubelet[2059]: E0715 11:27:49.065417 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\": not found" containerID="75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c"
Jul 15 11:27:49.065484 kubelet[2059]: I0715 11:27:49.065442 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c"} err="failed to get container status \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\": rpc error: code = NotFound desc = an error occurred when try to find container \"75b250562037332a3348e18dce812cea3cb5aa267edd4d7d7b9153d9eadd832c\": not found"
Jul 15 11:27:49.128238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fffa0fe386b0feabf95046c7f2234eca2f300d1c670372588520565cdfe95d-rootfs.mount: Deactivated successfully.
Jul 15 11:27:49.128359 systemd[1]: var-lib-kubelet-pods-8a07036d\x2df6a5\x2d42d4\x2db607\x2d0aa4500de691-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfrtf.mount: Deactivated successfully.
Jul 15 11:27:49.128469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2-rootfs.mount: Deactivated successfully.
Jul 15 11:27:49.128541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c25e06ea65c611537f23a3af934ce295b99ee6d3fcee171773af80a256b36f2-shm.mount: Deactivated successfully.
Jul 15 11:27:49.128630 systemd[1]: var-lib-kubelet-pods-0af08191\x2d10ec\x2d44c4\x2da087\x2df6925b4b6bf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtr49.mount: Deactivated successfully.
Jul 15 11:27:49.128734 systemd[1]: var-lib-kubelet-pods-0af08191\x2d10ec\x2d44c4\x2da087\x2df6925b4b6bf9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 15 11:27:49.128819 systemd[1]: var-lib-kubelet-pods-0af08191\x2d10ec\x2d44c4\x2da087\x2df6925b4b6bf9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 15 11:27:49.407702 sshd[3704]: pam_unix(sshd:session): session closed for user core
Jul 15 11:27:49.410229 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:53544.service.
Jul 15 11:27:49.411115 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:53530.service: Deactivated successfully.
Jul 15 11:27:49.412203 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 11:27:49.412709 systemd-logind[1290]: Session 24 logged out. Waiting for processes to exit.
Jul 15 11:27:49.413661 systemd-logind[1290]: Removed session 24.
Jul 15 11:27:49.452883 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 53544 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:27:49.454051 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:27:49.457456 systemd-logind[1290]: New session 25 of user core.
Jul 15 11:27:49.458455 systemd[1]: Started session-25.scope.
Jul 15 11:27:49.460134 kubelet[2059]: I0715 11:27:49.460100 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" path="/var/lib/kubelet/pods/0af08191-10ec-44c4-a087-f6925b4b6bf9/volumes" Jul 15 11:27:49.460621 kubelet[2059]: I0715 11:27:49.460603 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a07036d-f6a5-42d4-b607-0aa4500de691" path="/var/lib/kubelet/pods/8a07036d-f6a5-42d4-b607-0aa4500de691/volumes" Jul 15 11:27:50.457885 kubelet[2059]: E0715 11:27:50.457853 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:50.566160 sshd[3871]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:50.569456 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:50084.service. Jul 15 11:27:50.569981 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:53544.service: Deactivated successfully. Jul 15 11:27:50.571015 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 11:27:50.573995 systemd-logind[1290]: Session 25 logged out. Waiting for processes to exit. Jul 15 11:27:50.575090 systemd-logind[1290]: Removed session 25. 
Jul 15 11:27:50.588482 kubelet[2059]: E0715 11:27:50.588444 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" containerName="mount-bpf-fs" Jul 15 11:27:50.588482 kubelet[2059]: E0715 11:27:50.588472 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" containerName="clean-cilium-state" Jul 15 11:27:50.588482 kubelet[2059]: E0715 11:27:50.588481 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" containerName="mount-cgroup" Jul 15 11:27:50.588482 kubelet[2059]: E0715 11:27:50.588489 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" containerName="apply-sysctl-overwrites" Jul 15 11:27:50.588890 kubelet[2059]: E0715 11:27:50.588496 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a07036d-f6a5-42d4-b607-0aa4500de691" containerName="cilium-operator" Jul 15 11:27:50.588890 kubelet[2059]: E0715 11:27:50.588503 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" containerName="cilium-agent" Jul 15 11:27:50.588890 kubelet[2059]: I0715 11:27:50.588529 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af08191-10ec-44c4-a087-f6925b4b6bf9" containerName="cilium-agent" Jul 15 11:27:50.588890 kubelet[2059]: I0715 11:27:50.588536 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a07036d-f6a5-42d4-b607-0aa4500de691" containerName="cilium-operator" Jul 15 11:27:50.621884 sshd[3885]: Accepted publickey for core from 10.0.0.1 port 50084 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:50.622270 kubelet[2059]: I0715 11:27:50.622237 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-cgroup\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622329 kubelet[2059]: I0715 11:27:50.622278 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-bpf-maps\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622329 kubelet[2059]: I0715 11:27:50.622300 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-net\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622329 kubelet[2059]: I0715 11:27:50.622320 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-config-path\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622459 kubelet[2059]: I0715 11:27:50.622337 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-hubble-tls\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622459 kubelet[2059]: I0715 11:27:50.622355 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cni-path\") pod \"cilium-f2prt\" (UID: 
\"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622459 kubelet[2059]: I0715 11:27:50.622371 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-lib-modules\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622459 kubelet[2059]: I0715 11:27:50.622400 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-hostproc\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622459 kubelet[2059]: I0715 11:27:50.622417 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-clustermesh-secrets\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622459 kubelet[2059]: I0715 11:27:50.622434 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-run\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622588 kubelet[2059]: I0715 11:27:50.622449 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-ipsec-secrets\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622588 kubelet[2059]: I0715 11:27:50.622466 
2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-etc-cni-netd\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622588 kubelet[2059]: I0715 11:27:50.622482 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-kernel\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622588 kubelet[2059]: I0715 11:27:50.622503 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-xtables-lock\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.622588 kubelet[2059]: I0715 11:27:50.622519 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr5p9\" (UniqueName: \"kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-kube-api-access-dr5p9\") pod \"cilium-f2prt\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " pod="kube-system/cilium-f2prt" Jul 15 11:27:50.623143 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:50.626276 systemd-logind[1290]: New session 26 of user core. Jul 15 11:27:50.626942 systemd[1]: Started session-26.scope. Jul 15 11:27:50.744464 sshd[3885]: pam_unix(sshd:session): session closed for user core Jul 15 11:27:50.748228 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:50100.service. 
Jul 15 11:27:50.752186 kubelet[2059]: E0715 11:27:50.752160 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:50.752764 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:50084.service: Deactivated successfully. Jul 15 11:27:50.753223 env[1303]: time="2025-07-15T11:27:50.753182748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2prt,Uid:a3d9cd88-89f5-45db-a69f-6961bc287774,Namespace:kube-system,Attempt:0,}" Jul 15 11:27:50.753841 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 11:27:50.755431 systemd-logind[1290]: Session 26 logged out. Waiting for processes to exit. Jul 15 11:27:50.759434 systemd-logind[1290]: Removed session 26. Jul 15 11:27:50.792676 sshd[3902]: Accepted publickey for core from 10.0.0.1 port 50100 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:27:50.793630 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:27:50.796421 systemd-logind[1290]: New session 27 of user core. Jul 15 11:27:50.797088 systemd[1]: Started session-27.scope. Jul 15 11:27:50.948648 env[1303]: time="2025-07-15T11:27:50.947770788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:27:50.948648 env[1303]: time="2025-07-15T11:27:50.947800955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:27:50.948648 env[1303]: time="2025-07-15T11:27:50.947809952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:27:50.948648 env[1303]: time="2025-07-15T11:27:50.947903159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b pid=3922 runtime=io.containerd.runc.v2 Jul 15 11:27:50.985201 env[1303]: time="2025-07-15T11:27:50.985154497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2prt,Uid:a3d9cd88-89f5-45db-a69f-6961bc287774,Namespace:kube-system,Attempt:0,} returns sandbox id \"69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b\"" Jul 15 11:27:50.987133 kubelet[2059]: E0715 11:27:50.986518 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:50.988614 env[1303]: time="2025-07-15T11:27:50.988585136Z" level=info msg="CreateContainer within sandbox \"69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:27:51.001555 env[1303]: time="2025-07-15T11:27:51.001420142Z" level=info msg="CreateContainer within sandbox \"69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\"" Jul 15 11:27:51.002023 env[1303]: time="2025-07-15T11:27:51.001993882Z" level=info msg="StartContainer for \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\"" Jul 15 11:27:51.036157 env[1303]: time="2025-07-15T11:27:51.036105816Z" level=info msg="StartContainer for \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" returns successfully" Jul 15 11:27:51.049661 env[1303]: time="2025-07-15T11:27:51.049627369Z" level=info msg="StopContainer for 
\"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" with timeout 2 (s)" Jul 15 11:27:51.051042 env[1303]: time="2025-07-15T11:27:51.051020887Z" level=info msg="Stop container \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" with signal terminated" Jul 15 11:27:51.081265 env[1303]: time="2025-07-15T11:27:51.081210230Z" level=info msg="shim disconnected" id=b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee Jul 15 11:27:51.081265 env[1303]: time="2025-07-15T11:27:51.081262258Z" level=warning msg="cleaning up after shim disconnected" id=b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee namespace=k8s.io Jul 15 11:27:51.081265 env[1303]: time="2025-07-15T11:27:51.081273029Z" level=info msg="cleaning up dead shim" Jul 15 11:27:51.087941 env[1303]: time="2025-07-15T11:27:51.087890619Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4010 runtime=io.containerd.runc.v2\n" Jul 15 11:27:51.090979 env[1303]: time="2025-07-15T11:27:51.090928409Z" level=info msg="StopContainer for \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" returns successfully" Jul 15 11:27:51.091463 env[1303]: time="2025-07-15T11:27:51.091440983Z" level=info msg="StopPodSandbox for \"69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b\"" Jul 15 11:27:51.091526 env[1303]: time="2025-07-15T11:27:51.091488843Z" level=info msg="Container to stop \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:27:51.109731 env[1303]: time="2025-07-15T11:27:51.109687681Z" level=info msg="shim disconnected" id=69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b Jul 15 11:27:51.109731 env[1303]: time="2025-07-15T11:27:51.109726996Z" level=warning msg="cleaning up after shim disconnected" 
id=69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b namespace=k8s.io Jul 15 11:27:51.109731 env[1303]: time="2025-07-15T11:27:51.109736093Z" level=info msg="cleaning up dead shim" Jul 15 11:27:51.116028 env[1303]: time="2025-07-15T11:27:51.115992537Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n" Jul 15 11:27:51.116278 env[1303]: time="2025-07-15T11:27:51.116247932Z" level=info msg="TearDown network for sandbox \"69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b\" successfully" Jul 15 11:27:51.116278 env[1303]: time="2025-07-15T11:27:51.116269263Z" level=info msg="StopPodSandbox for \"69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b\" returns successfully" Jul 15 11:27:51.125921 kubelet[2059]: I0715 11:27:51.125892 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-run\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.125921 kubelet[2059]: I0715 11:27:51.125928 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cni-path\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126155 kubelet[2059]: I0715 11:27:51.125951 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-lib-modules\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126155 kubelet[2059]: I0715 11:27:51.125975 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-clustermesh-secrets\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126155 kubelet[2059]: I0715 11:27:51.125995 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-config-path\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126155 kubelet[2059]: I0715 11:27:51.126013 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-etc-cni-netd\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126155 kubelet[2059]: I0715 11:27:51.126013 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cni-path" (OuterVolumeSpecName: "cni-path") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.126155 kubelet[2059]: I0715 11:27:51.126032 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr5p9\" (UniqueName: \"kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-kube-api-access-dr5p9\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126364 kubelet[2059]: I0715 11:27:51.126049 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-bpf-maps\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126364 kubelet[2059]: I0715 11:27:51.126081 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-ipsec-secrets\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126364 kubelet[2059]: I0715 11:27:51.126099 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-kernel\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126364 kubelet[2059]: I0715 11:27:51.126117 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-cgroup\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126364 kubelet[2059]: I0715 11:27:51.126136 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-hubble-tls\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126364 kubelet[2059]: I0715 11:27:51.126155 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-xtables-lock\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126593 kubelet[2059]: I0715 11:27:51.126174 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-net\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126593 kubelet[2059]: I0715 11:27:51.126192 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-hostproc\") pod \"a3d9cd88-89f5-45db-a69f-6961bc287774\" (UID: \"a3d9cd88-89f5-45db-a69f-6961bc287774\") " Jul 15 11:27:51.126593 kubelet[2059]: I0715 11:27:51.126220 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.126593 kubelet[2059]: I0715 11:27:51.126250 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-hostproc" (OuterVolumeSpecName: "hostproc") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.126593 kubelet[2059]: I0715 11:27:51.126277 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.127495 kubelet[2059]: I0715 11:27:51.127458 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.127552 kubelet[2059]: I0715 11:27:51.127506 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.128890 kubelet[2059]: I0715 11:27:51.127857 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.128890 kubelet[2059]: I0715 11:27:51.127904 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.128890 kubelet[2059]: I0715 11:27:51.127926 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.128890 kubelet[2059]: I0715 11:27:51.128005 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.128890 kubelet[2059]: I0715 11:27:51.128026 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:27:51.129035 kubelet[2059]: I0715 11:27:51.128034 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:27:51.129418 kubelet[2059]: I0715 11:27:51.129393 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-kube-api-access-dr5p9" (OuterVolumeSpecName: "kube-api-access-dr5p9") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "kube-api-access-dr5p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:27:51.129651 kubelet[2059]: I0715 11:27:51.129631 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:27:51.131793 kubelet[2059]: I0715 11:27:51.130756 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:27:51.131793 kubelet[2059]: I0715 11:27:51.131027 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a3d9cd88-89f5-45db-a69f-6961bc287774" (UID: "a3d9cd88-89f5-45db-a69f-6961bc287774"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:27:51.227188 kubelet[2059]: I0715 11:27:51.227167 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227188 kubelet[2059]: I0715 11:27:51.227188 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227197 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227203 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227210 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227220 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227231 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227240 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227252 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227308 kubelet[2059]: I0715 11:27:51.227259 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227503 kubelet[2059]: I0715 11:27:51.227265 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3d9cd88-89f5-45db-a69f-6961bc287774-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227503 kubelet[2059]: I0715 11:27:51.227271 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d9cd88-89f5-45db-a69f-6961bc287774-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.227503 kubelet[2059]: I0715 11:27:51.227278 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr5p9\" (UniqueName: \"kubernetes.io/projected/a3d9cd88-89f5-45db-a69f-6961bc287774-kube-api-access-dr5p9\") on node 
\"localhost\" DevicePath \"\"" Jul 15 11:27:51.227503 kubelet[2059]: I0715 11:27:51.227284 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3d9cd88-89f5-45db-a69f-6961bc287774-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:27:51.515293 kubelet[2059]: E0715 11:27:51.515254 2059 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:27:51.731447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b-rootfs.mount: Deactivated successfully. Jul 15 11:27:51.731578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69082cecd7d388afbe89ed0007aed40504e95b175fbec401255c5184bfcd200b-shm.mount: Deactivated successfully. Jul 15 11:27:51.731659 systemd[1]: var-lib-kubelet-pods-a3d9cd88\x2d89f5\x2d45db\x2da69f\x2d6961bc287774-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddr5p9.mount: Deactivated successfully. Jul 15 11:27:51.731744 systemd[1]: var-lib-kubelet-pods-a3d9cd88\x2d89f5\x2d45db\x2da69f\x2d6961bc287774-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:27:51.731820 systemd[1]: var-lib-kubelet-pods-a3d9cd88\x2d89f5\x2d45db\x2da69f\x2d6961bc287774-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 15 11:27:51.731898 systemd[1]: var-lib-kubelet-pods-a3d9cd88\x2d89f5\x2d45db\x2da69f\x2d6961bc287774-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 15 11:27:52.043055 kubelet[2059]: I0715 11:27:52.043025 2059 scope.go:117] "RemoveContainer" containerID="b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee" Jul 15 11:27:52.044081 env[1303]: time="2025-07-15T11:27:52.044017884Z" level=info msg="RemoveContainer for \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\"" Jul 15 11:27:52.048177 env[1303]: time="2025-07-15T11:27:52.048100818Z" level=info msg="RemoveContainer for \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" returns successfully" Jul 15 11:27:52.048299 kubelet[2059]: I0715 11:27:52.048267 2059 scope.go:117] "RemoveContainer" containerID="b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee" Jul 15 11:27:52.049546 env[1303]: time="2025-07-15T11:27:52.049465229Z" level=error msg="ContainerStatus for \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\": not found" Jul 15 11:27:52.049678 kubelet[2059]: E0715 11:27:52.049648 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\": not found" containerID="b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee" Jul 15 11:27:52.049728 kubelet[2059]: I0715 11:27:52.049681 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee"} err="failed to get container status \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1904e2fd36e92397f45cf558150bd91954d546b579bb3d374f5ad065e2cabee\": not found" Jul 15 11:27:52.080249 kubelet[2059]: E0715 
11:27:52.080212 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d9cd88-89f5-45db-a69f-6961bc287774" containerName="mount-cgroup" Jul 15 11:27:52.080249 kubelet[2059]: I0715 11:27:52.080261 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d9cd88-89f5-45db-a69f-6961bc287774" containerName="mount-cgroup" Jul 15 11:27:52.132848 kubelet[2059]: I0715 11:27:52.132803 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-cni-path\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.132848 kubelet[2059]: I0715 11:27:52.132845 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-xtables-lock\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.132848 kubelet[2059]: I0715 11:27:52.132863 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-cilium-run\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133084 kubelet[2059]: I0715 11:27:52.132877 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-hostproc\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133084 kubelet[2059]: I0715 11:27:52.132894 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-host-proc-sys-net\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133084 kubelet[2059]: I0715 11:27:52.132939 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-etc-cni-netd\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133084 kubelet[2059]: I0715 11:27:52.132995 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-clustermesh-secrets\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133084 kubelet[2059]: I0715 11:27:52.133014 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-host-proc-sys-kernel\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133208 kubelet[2059]: I0715 11:27:52.133028 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-cilium-config-path\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133208 kubelet[2059]: I0715 11:27:52.133040 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-bpf-maps\") pod 
\"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133208 kubelet[2059]: I0715 11:27:52.133096 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-cilium-cgroup\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133208 kubelet[2059]: I0715 11:27:52.133139 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-cilium-ipsec-secrets\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133208 kubelet[2059]: I0715 11:27:52.133158 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-hubble-tls\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133208 kubelet[2059]: I0715 11:27:52.133187 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45lm\" (UniqueName: \"kubernetes.io/projected/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-kube-api-access-s45lm\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 11:27:52.133347 kubelet[2059]: I0715 11:27:52.133220 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc643c33-1be1-4fd0-8ccd-d3de619ecf6f-lib-modules\") pod \"cilium-k8pmq\" (UID: \"cc643c33-1be1-4fd0-8ccd-d3de619ecf6f\") " pod="kube-system/cilium-k8pmq" Jul 15 
11:27:52.397604 kubelet[2059]: E0715 11:27:52.397460 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:52.397987 env[1303]: time="2025-07-15T11:27:52.397950483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8pmq,Uid:cc643c33-1be1-4fd0-8ccd-d3de619ecf6f,Namespace:kube-system,Attempt:0,}" Jul 15 11:27:52.411443 env[1303]: time="2025-07-15T11:27:52.411351520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:27:52.411571 env[1303]: time="2025-07-15T11:27:52.411421083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:27:52.411571 env[1303]: time="2025-07-15T11:27:52.411455067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:27:52.411673 env[1303]: time="2025-07-15T11:27:52.411646971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0 pid=4070 runtime=io.containerd.runc.v2 Jul 15 11:27:52.442102 env[1303]: time="2025-07-15T11:27:52.442026960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8pmq,Uid:cc643c33-1be1-4fd0-8ccd-d3de619ecf6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\"" Jul 15 11:27:52.442603 kubelet[2059]: E0715 11:27:52.442582 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:52.445010 env[1303]: time="2025-07-15T11:27:52.444964859Z" level=info msg="CreateContainer 
within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:27:52.490882 env[1303]: time="2025-07-15T11:27:52.490816748Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fccde9dacb51beca6b0ecfbdec4cf2bde19f6b1bf7144c164e5464b5be6a81f4\"" Jul 15 11:27:52.491498 env[1303]: time="2025-07-15T11:27:52.491455350Z" level=info msg="StartContainer for \"fccde9dacb51beca6b0ecfbdec4cf2bde19f6b1bf7144c164e5464b5be6a81f4\"" Jul 15 11:27:52.529577 env[1303]: time="2025-07-15T11:27:52.529522467Z" level=info msg="StartContainer for \"fccde9dacb51beca6b0ecfbdec4cf2bde19f6b1bf7144c164e5464b5be6a81f4\" returns successfully" Jul 15 11:27:52.741073 env[1303]: time="2025-07-15T11:27:52.741012509Z" level=info msg="shim disconnected" id=fccde9dacb51beca6b0ecfbdec4cf2bde19f6b1bf7144c164e5464b5be6a81f4 Jul 15 11:27:52.741073 env[1303]: time="2025-07-15T11:27:52.741055831Z" level=warning msg="cleaning up after shim disconnected" id=fccde9dacb51beca6b0ecfbdec4cf2bde19f6b1bf7144c164e5464b5be6a81f4 namespace=k8s.io Jul 15 11:27:52.741073 env[1303]: time="2025-07-15T11:27:52.741074015Z" level=info msg="cleaning up dead shim" Jul 15 11:27:52.746922 env[1303]: time="2025-07-15T11:27:52.746897404Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4154 runtime=io.containerd.runc.v2\n" Jul 15 11:27:53.045951 kubelet[2059]: E0715 11:27:53.045611 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:53.047854 env[1303]: time="2025-07-15T11:27:53.047809148Z" level=info msg="CreateContainer within sandbox 
\"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:27:53.459811 kubelet[2059]: I0715 11:27:53.459773 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d9cd88-89f5-45db-a69f-6961bc287774" path="/var/lib/kubelet/pods/a3d9cd88-89f5-45db-a69f-6961bc287774/volumes" Jul 15 11:27:53.629452 env[1303]: time="2025-07-15T11:27:53.629357945Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5e63a6fdc19db002c09024c23807d173b4f863472a45f18ca0c50d42b66c4b59\"" Jul 15 11:27:53.630009 env[1303]: time="2025-07-15T11:27:53.629962844Z" level=info msg="StartContainer for \"5e63a6fdc19db002c09024c23807d173b4f863472a45f18ca0c50d42b66c4b59\"" Jul 15 11:27:53.796945 env[1303]: time="2025-07-15T11:27:53.796817419Z" level=info msg="StartContainer for \"5e63a6fdc19db002c09024c23807d173b4f863472a45f18ca0c50d42b66c4b59\" returns successfully" Jul 15 11:27:53.812275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e63a6fdc19db002c09024c23807d173b4f863472a45f18ca0c50d42b66c4b59-rootfs.mount: Deactivated successfully. 
Jul 15 11:27:53.956340 env[1303]: time="2025-07-15T11:27:53.956286475Z" level=info msg="shim disconnected" id=5e63a6fdc19db002c09024c23807d173b4f863472a45f18ca0c50d42b66c4b59 Jul 15 11:27:53.956340 env[1303]: time="2025-07-15T11:27:53.956334736Z" level=warning msg="cleaning up after shim disconnected" id=5e63a6fdc19db002c09024c23807d173b4f863472a45f18ca0c50d42b66c4b59 namespace=k8s.io Jul 15 11:27:53.956340 env[1303]: time="2025-07-15T11:27:53.956343613Z" level=info msg="cleaning up dead shim" Jul 15 11:27:53.963159 env[1303]: time="2025-07-15T11:27:53.963094789Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4216 runtime=io.containerd.runc.v2\n" Jul 15 11:27:54.050850 kubelet[2059]: E0715 11:27:54.050395 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:54.051849 env[1303]: time="2025-07-15T11:27:54.051812558Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:27:54.069105 kubelet[2059]: I0715 11:27:54.069050 2059 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T11:27:54Z","lastTransitionTime":"2025-07-15T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 11:27:54.377294 env[1303]: time="2025-07-15T11:27:54.377159861Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"49c96e379d163b1e217394456bb91f629881a1d2403cf7877f210491c90c3892\"" 
Jul 15 11:27:54.377883 env[1303]: time="2025-07-15T11:27:54.377670469Z" level=info msg="StartContainer for \"49c96e379d163b1e217394456bb91f629881a1d2403cf7877f210491c90c3892\"" Jul 15 11:27:54.426952 env[1303]: time="2025-07-15T11:27:54.426889642Z" level=info msg="StartContainer for \"49c96e379d163b1e217394456bb91f629881a1d2403cf7877f210491c90c3892\" returns successfully" Jul 15 11:27:54.453317 env[1303]: time="2025-07-15T11:27:54.453264128Z" level=info msg="shim disconnected" id=49c96e379d163b1e217394456bb91f629881a1d2403cf7877f210491c90c3892 Jul 15 11:27:54.453317 env[1303]: time="2025-07-15T11:27:54.453316046Z" level=warning msg="cleaning up after shim disconnected" id=49c96e379d163b1e217394456bb91f629881a1d2403cf7877f210491c90c3892 namespace=k8s.io Jul 15 11:27:54.453317 env[1303]: time="2025-07-15T11:27:54.453324813Z" level=info msg="cleaning up dead shim" Jul 15 11:27:54.460591 env[1303]: time="2025-07-15T11:27:54.460526130Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4272 runtime=io.containerd.runc.v2\n" Jul 15 11:27:54.731823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49c96e379d163b1e217394456bb91f629881a1d2403cf7877f210491c90c3892-rootfs.mount: Deactivated successfully. 
Jul 15 11:27:55.053539 kubelet[2059]: E0715 11:27:55.053437 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:55.055545 env[1303]: time="2025-07-15T11:27:55.055157754Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:27:55.143509 env[1303]: time="2025-07-15T11:27:55.143463865Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98ab727d41ab2177e576ef5f08dd80138fcb5a3ca481ed6a8e200b8cd11a07e1\"" Jul 15 11:27:55.143941 env[1303]: time="2025-07-15T11:27:55.143916053Z" level=info msg="StartContainer for \"98ab727d41ab2177e576ef5f08dd80138fcb5a3ca481ed6a8e200b8cd11a07e1\"" Jul 15 11:27:55.182006 env[1303]: time="2025-07-15T11:27:55.181963083Z" level=info msg="StartContainer for \"98ab727d41ab2177e576ef5f08dd80138fcb5a3ca481ed6a8e200b8cd11a07e1\" returns successfully" Jul 15 11:27:55.198326 env[1303]: time="2025-07-15T11:27:55.198274513Z" level=info msg="shim disconnected" id=98ab727d41ab2177e576ef5f08dd80138fcb5a3ca481ed6a8e200b8cd11a07e1 Jul 15 11:27:55.198326 env[1303]: time="2025-07-15T11:27:55.198321171Z" level=warning msg="cleaning up after shim disconnected" id=98ab727d41ab2177e576ef5f08dd80138fcb5a3ca481ed6a8e200b8cd11a07e1 namespace=k8s.io Jul 15 11:27:55.198326 env[1303]: time="2025-07-15T11:27:55.198331632Z" level=info msg="cleaning up dead shim" Jul 15 11:27:55.204763 env[1303]: time="2025-07-15T11:27:55.204737175Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:27:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4325 runtime=io.containerd.runc.v2\n" Jul 15 11:27:55.732069 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-98ab727d41ab2177e576ef5f08dd80138fcb5a3ca481ed6a8e200b8cd11a07e1-rootfs.mount: Deactivated successfully. Jul 15 11:27:56.057278 kubelet[2059]: E0715 11:27:56.056957 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:56.058606 env[1303]: time="2025-07-15T11:27:56.058567008Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:27:56.247643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116487136.mount: Deactivated successfully. Jul 15 11:27:56.412701 env[1303]: time="2025-07-15T11:27:56.412644454Z" level=info msg="CreateContainer within sandbox \"f2635048e76a161deb61293a0aa99ca7f78fbfff750d882596e624ed30671ad0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c609d8f2f583a436ce489060859c1dc3edae11a5d1ffbd481cae034864efe499\"" Jul 15 11:27:56.413233 env[1303]: time="2025-07-15T11:27:56.413209827Z" level=info msg="StartContainer for \"c609d8f2f583a436ce489060859c1dc3edae11a5d1ffbd481cae034864efe499\"" Jul 15 11:27:56.452887 env[1303]: time="2025-07-15T11:27:56.451824336Z" level=info msg="StartContainer for \"c609d8f2f583a436ce489060859c1dc3edae11a5d1ffbd481cae034864efe499\" returns successfully" Jul 15 11:27:56.691403 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 15 11:27:57.061596 kubelet[2059]: E0715 11:27:57.061570 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:57.074271 kubelet[2059]: I0715 11:27:57.073956 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k8pmq" 
podStartSLOduration=5.073938644 podStartE2EDuration="5.073938644s" podCreationTimestamp="2025-07-15 11:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:27:57.073725501 +0000 UTC m=+105.693014181" watchObservedRunningTime="2025-07-15 11:27:57.073938644 +0000 UTC m=+105.693227314" Jul 15 11:27:58.398681 kubelet[2059]: E0715 11:27:58.398629 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:27:59.246724 systemd-networkd[1080]: lxc_health: Link UP Jul 15 11:27:59.254500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:27:59.254246 systemd-networkd[1080]: lxc_health: Gained carrier Jul 15 11:28:00.399459 kubelet[2059]: E0715 11:28:00.399404 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:28:01.067878 kubelet[2059]: E0715 11:28:01.067835 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:28:01.257802 systemd-networkd[1080]: lxc_health: Gained IPv6LL Jul 15 11:28:02.069484 kubelet[2059]: E0715 11:28:02.069449 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:28:05.458748 kubelet[2059]: E0715 11:28:05.458715 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:28:07.553470 sshd[3902]: pam_unix(sshd:session): session closed for user core Jul 15 11:28:07.555869 
systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:50100.service: Deactivated successfully. Jul 15 11:28:07.556922 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 11:28:07.557351 systemd-logind[1290]: Session 27 logged out. Waiting for processes to exit. Jul 15 11:28:07.558216 systemd-logind[1290]: Removed session 27.